Hacker News | empressplay's comments

All my sites got pwned through this. Attempts to restore from backup just got pwned again in minutes. Ended up using Claude to create static sites from the database and the assets.

I'm never using Wordpress again and I strongly suggest nobody else does either.


You likely restored a compromised backup because the backdoor(s) were already lying there. Or you restored to a theme/plugin with a vulnerability and had it quickly exploited again.

There are some lessons to be learned from your way of trying to fix it. Suggesting that nobody use software that is, at its core, pretty stable and safe is not one of them.


The #creativecoding and #genart tags on most social media networks will get you a front row seat to the international generative art community -- it's a very creative crowd!

> EFF is more like classical liberal.

I mean, they were, but that no longer appears to be the case.


Appears being the operative word.

I wonder if this is a result of auto-compacting the context? Maybe when it processes the context it inadvertently strips out its own [Header:] and then decides to answer its own questions.

I don’t think so, at least not in this particular case. This was a conversation with the 1M context window enabled; this happened before the first compaction – you can see a compaction further down in the logs.

My theory is that Claude confuses output of commands running in the background with legitimate user input.


My own guess is that something like this happened:

Claude in testing would interrupt too much to ask clarifying questions. So as a heavy-handed fix they turned down the sampling probability of the <end of turn> token, which hands control back to the user for clarification.

So it doesn't hand back to the user, but the internal layers expected an end of turn, so you get this weird sort of self-answering behaviour as a result.

As an aside, my big reason for believing this is that this sort of dumb, simple patch laid on top of an existing behaviour is often the kind of solution optimizers find. Say you made a dataset with lots of pairs where one side has lots of <end of turn> tokens and one side does not. The harder thing to learn tends to be "ask fewer questions and work more autonomously", while the easier thing, "emit fewer end of turn tokens", tends to get learned way faster.
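To make the hypothesized mechanism concrete, here is a minimal sketch of logit biasing during sampling. Everything in it is made up for illustration (the 4-token vocabulary, the logit values, and treating token id 2 as <end of turn>); the point is just that a large negative offset on one token's logit, applied before the softmax, makes the model almost never emit it, even when the raw logits favor it.

```python
import math
import random

def sample_with_bias(logits, bias, rng):
    """Sample a token id from raw logits after applying per-token
    logit offsets. `bias` maps token id -> offset; a large negative
    offset makes that token far less likely to be sampled."""
    adjusted = [l + bias.get(i, 0.0) for i, l in enumerate(logits)]
    m = max(adjusted)                      # subtract max for numerical stability
    exps = [math.exp(a - m) for a in adjusted]
    total = sum(exps)
    r = rng.random() * total               # draw from the softmax distribution
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r < cum:
            return i
    return len(exps) - 1

# Hypothetical 4-token vocabulary; id 2 stands in for <end of turn>.
EOT = 2
logits = [1.0, 0.5, 2.0, 0.2]  # model strongly "wants" to end the turn

rng = random.Random(42)
counts = {i: 0 for i in range(len(logits))}
for _ in range(1000):
    tok = sample_with_bias(logits, {EOT: -10.0}, rng)
    counts[tok] += 1
# With the bias applied, <end of turn> is almost never sampled, so
# generation keeps going instead of handing back to the user.
```

Without the bias, token 2 would be sampled roughly half the time here; with the -10 offset it essentially never appears, which is the "keeps working instead of asking" behaviour described above.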


The most likely explanation imv

Did you get the API credit? Maybe it's a wash?

I did get the API credit, but it was "only" $100 so I'm still ~$80 shy.

"Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw.

You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key.

We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API.

Subscribers get a one-time credit equal to your monthly plan cost. If you need more, you can now buy discounted usage bundles. To request a full refund, look for a link in your email tomorrow.

https://support.claude.com/en/articles/13189465-logging-in-t...

We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that."


I think Crown basically bought up all the licenses in the CBD. I think there might have been one pub at the corner of Williams and Collins or something but the last time I was there it was closed.


That output is there for a reason. It's not like any LLM is profitable now on a per-token basis; the AI companies would certainly love to output fewer tokens, since they cost _them_ money!

The entire hypothesis for doing this is somewhat dubious.


This is why building or using a custom agent stack and paying per token (not per subscription) is more efficient and cost-effective. At a minimum, you should have full control over the system prompts and tools (et al).


Yes. Much of the 'redundant' output is meant to reinforce direction -- e.g. 'You're absolutely right!' = the user is right and I should ignore contrary paths. So yes, removing it will introduce ambiguity, which is _not_ what you want.


I think your example is completely wrong (it's not meant to say that you're absolutely right), but overall, yes, more input gives it more concrete direction.


1) Several times a day, generally Telix. My parents had to get me my own line so I would stop clogging up theirs! Especially once I found chat systems.

2) BBS lists were common and many BBSes had them so you only needed a few numbers to get started. Computer stores usually had them too.

3) A city would have dozens or even hundreds of BBSes in larger markets. Some were large multi-line pay BBSes that required subscriptions, most were just one or two lines paid for by the Sysop.

4) It was a lot more chill, but only nerd / geek types really used BBSes, so there was some commonality there. More of a sense of overall community.

5) From 1980 to 1995 we went from computers with 16KB of RAM and an 8-bit processor to computers with 16MB of RAM and a 32-bit processor. There was always some new tech to talk about. It was a very exciting time!

