> Author here. I did not use AI to write this essay.
Maybe you did. Maybe you didn't. It's your word vs. theirs.
But one thing that is undeniable is that your article reads very much like AI-generated text. While reading it, I couldn't help thinking how ironic it is to write about the virtues of simpler devices in what is obviously an AI-generated article.
The Pangram report doesn't help your case either: https://www.pangram.com/history/f733dac6-a23f-480e-b18a-6794... (100% AI Generated)
Yeah, this one demonstrates a particularly pernicious view of software development. One where growth, no matter how artificial, is the only sign of success.
If you work with service-oriented software, the projects that are "dying" may very well be the most successful ones if they're key components. Even from a business perspective, having to write less code can also be a sign of success.
I don't know why this was overlooked when the churn metric is right there.
Whenever we initiated a new (internal) SW project, it had to go through an audit. One of the items in the checklist for any dependency was "Must have releases in the last 2 years"
I think the rationale was the risk of security vulnerabilities not being addressed, but still ...
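A minimal sketch of how a check like that could be automated for GitHub-hosted dependencies (the endpoint is the public GitHub API; the two-year cutoff and the example repo are just placeholders, not from the actual checklist):

    # Sketch: flag dependencies with no GitHub release in the last N years.
    # Unauthenticated API calls are rate-limited; pass a token for real use.
    import json
    import urllib.error
    import urllib.request
    from datetime import datetime, timedelta, timezone

    def latest_release_date(owner, repo):
        url = f"https://api.github.com/repos/{owner}/{repo}/releases/latest"
        try:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError:
            return None  # 404 when the repo has never published a release
        return datetime.fromisoformat(data["published_at"].replace("Z", "+00:00"))

    def passes_release_check(owner, repo, years=2):
        published = latest_release_date(owner, repo)
        if published is None:
            return False
        return datetime.now(timezone.utc) - published < timedelta(days=365 * years)

    print(passes_release_check("pallets", "flask"))  # example dependency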
That was my question too. I have plenty of projects I've worked on where they rarely get touched anymore. They don't need new features and nothing is broken.
Sometimes you need to bump a dependency version, adjust the code to a changed API endpoint, or update a schema. Even if the core features stay the same, there's some expected maintenance. I'd still call that being worked on, in a sense that someone has to do it.
Technically you're correct that change frequency doesn't necessarily mean dead, but the number of projects that are receiving very few updates because they're 'done' is a fraction of a fraction of a percent compared to the number that are just plain dead. I'm certain you can use change frequency as a proxy and almost never be wrong.
That sort of project exists in an ocean of abandoned and dead projects though. For every app that's finished and getting one update every few years there are thousands of projects that are utterly broken and undeployable, or abandoned on Github in an unfinished state, or sitting on someone's HDD never to be touched again. Assuming a low change frequency means 'dead' is almost always correct, which is what makes it a reasonable proxy.
I know people win the lottery every week, but I also believe that buying a lottery ticket is essentially the same as losing. It's the same principle.
With respect, this is a myopic view. Not all software is an "app" or a monolith. If you use a terminal, you are directly using many utilities that by this metric are considered dying or dead.
> it doesn't have to be files. it could be in memory on the browser.
How would that work? If it's in memory, would the extensions vanish every time I shut down Chrome? Would I have to reinstall all my extensions every time I restart Chrome?
Have you seen any browser that keeps extensions in memory? One that asks the user to reinstall their extensions every time they start the browser?
> but the language of "your computer" implies files on your computer, as it would be what people commonly call it. Merely just the extension is not enough.
But the language of "your computer" also implies software on your computer including but not limited to Chrome extensions.
It implies more than just the browser, which is likely why it was used for the post title. If it is exclusively limited to the browser, then "scans your browser" is more accurate, and doesn't mislead the reader into thinking something is happening that isn't already commonplace on the internet.
> An encouragement to be mindful of language, and therefore discuss what shared context we're trying to build, shouldn't be so controversial in a self-professed 'thoughtful' [0] forum.
I don't understand how HN's news guidelines apply to a blogger writing an article on their own blog. The controversial language was found in the article. It wasn't found in the thread you're replying to.
> Am I reading this right that people can (and do??) use images as a complete replacement for source code files?
Images are not replacements for source code files. Images are used in addition to source code files. Source code is checked in. Images are created and shipped. The image lets you debug things live if you've got to. You can introspect, live debug, live patch and do all the shenanigans. But if you're making fixes, you'd make the changes in source code, check them in, build a new image and ship that.
in smalltalk you make the changes in the image while it is running. the modern process is that you then export the changes into a version control system. originally you only had the image itself. apparently squeak has objects inside that go back to 1977:
https://lists.squeakfoundation.org/archives/list/squeak-dev@...
by "originally" i meant before the use of version control systems became common and expected. i don't know the actual history here, but i just found this thread that looks promising to contain some interesting details: https://news.ycombinator.com/item?id=15206339 (it is also discussing lisp, which brings this subthread back in line with the original topic :-)
that's very interesting, thank you, i should have realized that even early on there had to be a way to share code between images. (and i don't know why i missed that comment before responding myself)
but, doesn't building a new system image involve taking an old/existing image, adding/merging all the changes, and then releasing a new image and sources file from that?
in other words, the image is not recreated from scratch every time and it is more than just a cache.
what is described there is the process of source management in the absence of a proper revision control system. obviously when multiple people work on the same project, somewhere the changes need to be tracked and merged.
but that doesn't change the fact that the changes first happen in an image, and that you could save that image and write out a new sources file.
> There's a reason it has so many stars and most of the people getting something out of it are not posting on X.
That reason is buying stars, agent swarms, and astroturfing.
No project gathers 200K stars genuinely in 3 months. There are far more useful and popular projects that needed 10 years to get to 200K stars. When you see a project like this get 200K stars in just 3 months, you know something is very fishy.
There just aren't enough hobbyists in the world running local AI models, never mind technically savvy enough to hack something like OpenClaw and be really excited about it.
For a comparison, the local image-gen interfaces ComfyUI and A1111 WebUI have a huge number of stars (~100k and 160k respectively, accrued since 2022 or so), but they let you create porn customized to whatever kinks you have, not just automate things for the sake of automation. One of those is a rather bigger value prop than the other, dopamine-wise.
Why would they be running local AI models? The creator of OpenClaw explicitly recommends against running OpenClaw on local LLM models at this time, because they're not as powerful as frontier models and are much more gullible to prompt injection and the like.
> There just aren't enough hobbyists in the world running local AI models, never mind technically savvy enough to hack something like OpenClaw and be really excited about it.
No you don't understand, just because there are X people capable of doing this and my project got (X + YX) stars in 3 months, that only means that my project is very popular and there are no shenanigans occurring _at all_.
If you suggest otherwise you are a Luddite who doesn't understand and probably hates progress.
React and Linux got their 200K stars slowly but surely over 10 years. OpenClaw got its 200K stars in like 3 months! How is that a meaningful comparison?
Getting 200K stars today doesn't mean much because today stars can be bought. There's a big, shady, thriving business of selling stars. Stars today can be generated using swarms of thoughtless agents. What's the use of counting these stars when they don't mean anything anymore?
I was looking at the accounts that have starred the OpenClaw project. Many seem relatively old, but I couldn't find more than a handful that seemed to be publicly active in any sense (e.g. making PRs or commits). Same story with the forks. A metric shit-ton of forks, with no branches or commits or any sign of activity on them.
Compare that to the people who have been starring my projects: every single one of them has some sort of activity on record.
--edit--
Checked again now; it seems more recent accounts have some activity. But still, lots of accounts like these.
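For anyone who wants to reproduce that spot check, here's roughly what it amounts to against the public GitHub API (owner/repo are placeholders, and unauthenticated calls are rate-limited to 60/hour):

    # Sketch: sample a repo's stargazers and count how many show no recent public activity.
    import json
    import urllib.request

    def get_json(url):
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def inactive_stargazers(owner, repo, sample=30):
        stargazers = get_json(
            f"https://api.github.com/repos/{owner}/{repo}/stargazers?per_page={sample}")
        inactive = 0
        for user in stargazers:
            events = get_json(
                f"https://api.github.com/users/{user['login']}/events/public?per_page=1")
            if not events:  # no recent public events (the API only covers ~90 days)
                inactive += 1
        return inactive, len(stargazers)

    print(inactive_stargazers("someowner", "somerepo"))  # placeholder repo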
If the domain is being given away for free, it will be used a lot for scams etc., so a lot of systems will just start blocking it immediately. When I got my first domain, I used one of the free TLDs and my university blocked it completely as a presumed scam. Not for any of the content on it, just because the TLD is commonly used by scammers.
That’s my question. I’ve launched many fresh websites that have not been marked as unsafe by Google. If they were habitually doing this, there would be far more reports of it.
I suspect there is something the author is not telling us.
Even if the false-positive rate is very small (e.g. 0.01%), you probably won't be affected, but more than a hundred thousand websites would be, and that would still be an issue. I have no idea how big the false-positive rate is.
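Back-of-the-envelope, assuming something on the order of a billion registered sites (an assumed figure, just to show the scale):

    # Even a tiny false-positive rate wrongly flags a lot of sites.
    sites = 1_000_000_000   # assumed order of magnitude, not an official count
    fp_rate = 0.0001        # 0.01%
    print(sites * fp_rate)  # 100000.0 sites caught by mistake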
There are many reports of the same thing happening to other sites; here are some of the top ones (you can find many more by searching HN for "google safe browsing"):
The site is already back online after the post. You can check yourself. If I really did have malicious content on the site, this post would have had zero effect on the result.
The domain has no history as far as I could search and the site was up for almost 6 weeks with no issues before it was nuked. I used it with Apple's review process!
The big scary red warning page should at least tell you it’s phishing or malware or something else. OP didn’t have a screenshot of that. You can easily go to a safe browsing test site yourself at testsafebrowsing.appspot.com and find that Google does divulge the category of the blacklisting.
OP says:
> no gore or violence or anything of that sort
That's not even the right criterion. OP is confusing Google Safe Browsing with SafeSearch.
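For what it's worth, the threat category can also be queried programmatically via the Safe Browsing Lookup API (v4); here's a rough sketch, where the API key and the URL being checked are placeholders:

    # Sketch: ask the Safe Browsing Lookup API (v4) which threat category a URL is flagged under.
    # Requires a Google API key with Safe Browsing enabled.
    import json
    import urllib.request

    API_KEY = "YOUR_API_KEY"  # placeholder
    body = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": "https://example.com/"}],  # URL to check
        },
    }
    req = urllib.request.Request(
        f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        matches = json.load(resp).get("matches", [])
    # Each match carries a "threatType", e.g. SOCIAL_ENGINEERING for phishing.
    print(matches or "not currently flagged")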
That sounds like a competitor of yours manually submitting your site to Google for "impersonating" them or something. Anyone can submit a URL to Google to suggest it be blocked: https://safebrowsing.google.com/safebrowsing/report_phish/ Perhaps some overworked, underpaid analyst had a lapse of judgment. I'm sorry that this happened to you.
Wait, this actually makes things sound even worse. Anyone who doesn't like your product can report it to Google, Google can sometimes be none the wiser and flag it as phishing, and then their domains (i.e. any TLDs hosted by radix.website) are essentially lost in the void, unless you have verified the domain in Google Analytics. Even then I would consider this whole situation to be a mess.
At this point, NEVER buy domains on any TLD run by radix.website.
I see Pinggy had the same issue with their .online domain, and it definitely hurt their business: https://news.ycombinator.com/item?id=40195410 (I saw that post via their comment in here referencing it)