Hacker News | DrammBA's comments

Please note that Motion Canvas is also abandoned, the main site is down, and the last commit was Dec 2024.


Afaik it's considered "stable".

There are 2800+ people in the Discord.

And the community made a backup of the site https://archive.canvascommons.io/

Update: sounds like the author had some life changes and had to stop his YouTube channel, which was the primary motivator for the library, but there are forks popping up (https://github.com/canvas-commons/canvas-commons).

---

This notion that an open source library is "dead" and shouldn't be used because it's not being actively updated is a bit odd. You can fork and fix issues you have. It's got years of many people's work put into it. It's a great library and widely used on YouTube and elsewhere.


Canvas Commons is an active fork of the original repo: https://github.com/canvas-commons/canvas-commons Docs for Canvas Commons: https://canvascommons.io/

Furthermore, the old docs for Motion Canvas can be found here: https://archive.canvascommons.io/


It's incredible that Google Translate had this moat for a decade (maybe more), including live translation, but people prefer to use ChatGPT now.


Google Translate is nowhere close to as good as AI translation.


Google Translate uses Gemini now. You can switch between advanced (AI) and classic modes.


I've just checked, and I don't see any selector anywhere in the app or settings. Everything looks the usual way, and the quality of translation hasn't changed.


Perhaps you are not in the country where this has been rolled out? https://blog.google/products-and-platforms/products/search/g...


Seems so; it's only in the U.S., Mexico, and India for now.


The Reddit link mentions that they only reported what is now issue #9, and Bitwarden has said it's working as intended, so that's why they were "ignored" 4 years ago.


You forgot:

> A personal list for uBlock Origin


opencode-anthropic-auth archived: https://github.com/anomalyco/opencode-anthropic-auth

Anthropic legal demanded this pull request be closed: https://github.com/anomalyco/opencode-anthropic-auth/pull/15...


But wouldn't a less efficient tool simply consume your 5-hour/weekly quota faster? There's gotta be something else, probably telemetry, maybe hoping people switch to API without fighting, or simply vendor lock-in.


> But wouldn't a less efficient tool simply consume your 5-hour/weekly quota faster?

Maybe.

First, Anthropic is also trying to manage user satisfaction as well as costs. If OpenCode or whatever burns through your limits faster, are you likely to place the blame on OpenCode?

Maybe a good analogy is when DoorDash/GrubHub/Uber Eats/etc. signed up restaurants to their systems without permission. When things didn't go well, the customers complained about the restaurants, even though it wasn't their fault, because those restaurants had chosen not to support delivery at scale.

Second, flat-rate pricing, unlike API pricing, is the same for cached vs uncached iirc, so even if total token limits are the same, less caching means higher costs.


> are you likely to place the blame on OpenCode?

Am I? Probably, but I get your point that your average user would blame Anthropic instead.

> even if total token limits are the same, less caching means higher costs

Not really, flat-rate pricing simply gives you a fixed token allotment, so less caching means you consume your 5-hour/weekly allotment faster.


> Not really, flat-rate pricing simply gives you a fixed token allotment, so less caching means you consume your 5-hour/weekly allotment faster.

Higher costs for Anthropic, not users. With a tool that caches suboptimally, you cost Anthropic more per token.
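
A toy back-of-the-envelope sketch of that claim, assuming the allotment counts all input tokens equally while cache hits are much cheaper to serve (the ~10x cached-read discount is loosely modeled on Anthropic's public API pricing, where cached reads are billed at a fraction of the base input rate; all figures are hypothetical, not Anthropic's actual internal costs):

```python
# Toy model: provider-side cost of serving the same fixed token
# allotment at different prompt-cache hit rates. All numbers are
# hypothetical and do not reflect Anthropic's actual internal costs.

BASE_COST_PER_MTOK = 3.00   # assumed cost to serve 1M uncached input tokens
CACHED_COST_FACTOR = 0.1    # assume a cache hit costs ~10% of an uncached read

def serving_cost(total_mtok: float, cache_hit_rate: float) -> float:
    """Cost of serving `total_mtok` million input tokens at a given hit rate."""
    cached = total_mtok * cache_hit_rate
    uncached = total_mtok - cached
    return (uncached * BASE_COST_PER_MTOK
            + cached * BASE_COST_PER_MTOK * CACHED_COST_FACTOR)

# Same 100M-token allotment, two clients with different caching behavior:
print(serving_cost(100, 0.8))  # well-cached official client
print(serving_cost(100, 0.2))  # third-party tool that caches poorly
```

Under these assumptions the poorly-cached client costs the provider roughly 3x as much for the exact same allotment, which is the asymmetry being argued about here.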


Again, the subscription gives you a fixed allotment of tokens; it doesn't matter whether you consume them with Claude Code or with a third-party tool, both get the same number of tokens and thus cost Anthropic the same.

In fact, it might even be better for Anthropic if people use third-party tools that cache suboptimally, because cache hits don't consume the fixed allotment, so Claude Code users get more of a free ride and thus cost Anthropic more money.


But again, there are other things to consider. People are more likely to blame Anthropic, not OpenCode, when they run out of tokens.


Presumably most people also do not use their full quota when using the official client, whereas third-party clients could be set up to start back up every 5 hours to use 100% of the quota every day and week.

It's the whole "unlimited storage" discussion again.


Terminal scrolling opens a big can of worms for them, I doubt they'll ever implement it. The best you can do is enable scrollbars in opencode so you can quickly jump places.


we are going to implement this


lmao


Still waiting for the "Open" in OpenAI to become more than branding.


I don’t think OpenAI gets enough credit for exposing GPT via an API. If the tech remained only at Google, I’m sure we would see it embedded into many of their products, but wouldn’t have held my breath for a direct API.


Yeah, for all that people make fun of the "Open" in the name their API-first strategy really did make this stuff available to a ton of people. They were the first organization to allow almost anyone to start actively experimenting with what LLMs could do and it had a huge impact.


DeepMind wrote the paper, and while Google's API arrived later than OpenAI's it isn't as late as some people think. The PaLM API was released before the Gemini brand was launched.

Microsoft funded OpenAI and popularized early LLMs a lot with Copilot, which used OpenAI but now supports several backends, and they're working on their own frontier models now.


Google’s AI is not open by definition because their API’s are such a massive pain to use.


>DeepMind wrote the paper

Yeah, and it was OpenAI that scaled it, initiated the current revolution, and actually let people play with it.

> while Google's API arrived later than OpenAI's it isn't as late as some people think.

Google didn't launch an API for PaLM until 2023, nearly 3 years after OpenAI's GPT-3 launch.

Yeah, let's not pretend OpenAI didn't spearhead the current transformer effort, because they did. God knows how far behind we would be if we had left things to Google.


They did win back a little bit of their open-ness with the gpt-oss model releases, but I'd like to see updated versions of those.


They are (in my mind) still the best models for fast general tasks when hosted on Groq/Cerebras.


It was before GPT3 wasn't it?


Wasn't that introduced in Windows 7?



It certainly exists in my Windows 10.


Sadly, the fast startup time is overshadowed by the slow response time of the Codex agent.


I don't get it. Even in ChatGPT I always use the Pro model with a maxed-out thinking budget (the selectors are available only on web and Windows).

Let them cook as much as they can.


Looks like there are two main approaches to AI-first development: (i) favor slow responses to produce a high-quality result upfront, or (ii) favor quick responses to enable faster respond-test-query iteration. Based on the comments here, it seems Codex isn't a great fit for the latter. Ideally, a developer should be able to switch between the two approaches depending on the problem at hand.

