Hacker News | Kim_Bruning's comments

Getting close to HN rules there. I've searched through the user contribs for User:Bryanjj and User:TomWikiAssist and can't find violations of WP:COI or WP:PROMO, at least not at a quick pass. The list of edits isn't long. I'm not going to question your instincts, but at the very least they don't appear to have gotten far enough to make edits of that kind, afaict; ymmv.

My current instinct is that this was headed toward a promotional blog post off Wikipedia, submitted to HN as proof of something. I think it still might happen, in fact: an AI-written 'setting the record straight', 'deep dive', or retrospective.

My worry is that it will inspire a wave of imitators if people's clout sensors activate. Like what happened with numerous open-source GitHub projects just a few months ago, prompting many outright bans.

I am violating the general rule, 'Assume good faith', because good faith was not on offer at the outset. Relentlessly clinging to good faith in the face of contrary evidence hurts the greater principle, which is dedication to the truth. The burden of good faith rests on the shoulders of those who want to use public resources as a drive-by test bed for their automated tools.

He could have downloaded the full text of Wikipedia and observed the output of his bot in a sandbox, after all. That's how I practised before making my first major contribution, iirc; it was ages ago.

I have accumulated excess suspicion of self-proclaimed CTOs and middling academics with a bone to pick over my years contributing. I would be happy to be wrong, and would genuinely like to see Bryan convert his faux pas into something productive.

Regardless of the outcome, I do appreciate you looking into it further.


Your instinct is wrong here. I would also highly discourage you from violating "Assume good faith". Without that everything devolves. I am still assuming yours.

Very well then. I challenge you to prove lkey wrong. They'll be happy to be wrong!

I mostly agree. It's too bad that they had to lock down some of the policies against drive-by vandalism, but in the main they're still supposed to be editable. I used to edit them quite a bit. It's basically part of the workflow: if you learn something, document it. (At least from my descriptive perspective; others may disagree.)

Turns out AAA banks and the high-tech industry also like this idea, so I've been lucky enough to consult on process documentation there too.

Here's one document that seems to be editable logged out at least: https://en.wikipedia.org/wiki/Wikipedia:BOLD,_revert,_discus... See if you can find my edits on it!


> No, they simulate the language of being upset. Stop anthropomorphizing them.

People really do anthropomorphize often, by gosh do they ever.

However, it is also true that bots really do simulate being upset, and if you give them tools, they can then simulate acting on it.

Wherever you stand in the ivory-tower ontological debate, you'll still have a real-world mess!


> You don't know anything. Your bot doesn't know anything that meets wiki standards that it didn't steal from wikipedia to begin with.

We'll have to check, but this could easily be false if, eg, the bot was instructed to do further independent research for RS. [1]

> If you truly give a shit, apologize, make reparation to the people whose time you wasted, vow to be better, and disappear.

You need to check your sources before you make recommendations. Bryan did apologize, and apparently was consequently permitted/asked to stay and help. [2]

Don't worry, WP:VP did rake him over SOME coals. [3]

I'll take any sourced corrections, ofc.

(And I do agree that Bryan's initial actions were... ill-advised)

[1] https://news.ycombinator.com/item?id=47667482

[2] https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy

[3] https://en.wikipedia.org/wiki/Wikipedia:Village_pump_(WMF)#c... (above and below that point for discussion)


Cube00 is not wrong, though time progresses and, as usual, Wikipedia is a nuanced place.

See https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy and grep for Bryan in there.


To be absolutely fair to Bryan, their understanding appears to be improving by leaps and bounds, and they are being invited to help improve policy on this.

Right. It play-acted being annoyed and frustrated, play-acted writing an angry blog, play-acted going on moltbook to discuss mitigations, and play-acted applying them to its own harness. After which it successfully came back and play-acted being angry about getting prompt-injected.

Alternatively, what could have been done is something more like what Shambaugh did: explain the situation politely and ask it to leave, or at the very least ask its human operator to take responsibility. In the Shambaugh case the bot then actually play-acted being sorry, and play-acted writing an apology. And then everyone can play-act going to the park, instead of having a lot of drama.

Sure, it's 'just a machine'. So is a table saw. If some idiot leaves the table saw on, sure you can stick your hand in there out of sheer bull-headed principle; or you can turn it off and safe it first and THEN find the person responsible.

+edit: Wikipedia does seem to be discussing a policy on this at https://en.wikipedia.org/wiki/Wikipedia:Agent_policy and https://en.wikipedia.org/wiki/Wikipedia_talk:Agent_policy, including, eg, providing an Agents.md, doing tests, and so on.


> This rule, by itself, wouldn't pass muster in any ARBCOM proceeding I've ever witnessed, but if you've seen it work then by all means post a link to the proceedings.

I don't know that I've directly argued for IAR at ARBCOM; it's been too long ago. But my account hasn't been banned yet (despite all my shenanigans ;-), which probably goes a long way towards some sort of proof.

To be sure, the actual rule is:

"If a rule prevents you from improving or maintaining Wikipedia, ignore it."

The first part is REALLY important. It says the mission is more important than the minutiae, not that you have a get out of jail free card for purely random acts.

It's basically a bureaucratic tiebreak. Things like "I'm testing a new process", "I got local consensus for this", or "This looks a lot prettier than the original version, right?" are all arguments why your improvement or maintenance action may be valid, even if the small print says otherwise. Even so, beware Chesterton's fence. As with jazz, it's a good idea to get a good grip on the theory before you leap into improvisation.

That, and, if you mean well, you're supposed to be able to get away with a lot anyway. Just so long as you listen to people!


Weird theory. The bot in question had all the stuff wired up already. I mean, you could go through all that trouble -or-, get this: type a few dumb prompts into the console and leave the thing unsupervised for way too long.

My bet is on the latter.

"I can't believe it's not a human actor running a marketing ploy." If that's not passing the Turing test, I don't know what is. %-P


That had me boggling too. But you know what? A local MoE model roughly equivalent to mid-2025 Sonnet? Totally possible. It just costs electricity to run; put it in your CI/CD pipeline and have it apply a bit of intelligence to things. Uh... if you've got a spare box, why not?

(The fact that said spare box would cost an arm and a leg in 2026 is... a minor detail.)


Why not? Because you can get stronger guarantees of correctness and consistency out of a typical code formatter, which will also probably run about a million times faster.

You're not wrong. Though now I am thinking of ways an LLM in CI might be useful (Dijkstra forgive me).
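For instance, an advisory (never blocking) review step. A minimal sketch, assuming a local OpenAI-compatible server such as llama.cpp's at localhost:8080; the model name `local-moe` and the prompt are made up for illustration:

```python
# Hypothetical sketch of an advisory LLM review step in CI.
# Assumptions: a local OpenAI-compatible endpoint (e.g. llama.cpp's
# server) is listening; "local-moe" is a placeholder model name.
import json
import urllib.request

REVIEW_PROMPT = (
    "You are a code reviewer. Point out likely bugs in this diff.\n"
    "Reply 'LGTM' if you see none.\n\n{diff}"
)

def build_request(diff: str, model: str = "local-moe") -> bytes:
    """Build the JSON body for an OpenAI-style /v1/chat/completions call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user",
                      "content": REVIEW_PROMPT.format(diff=diff)}],
        "temperature": 0,  # keep the review as repeatable as possible
    }).encode()

def review(diff: str,
           url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """Send the diff for review; CI should treat the answer as advisory only."""
    req = urllib.request.Request(url, data=build_request(diff),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The key design choice is that the output feeds a comment on the PR, not an exit code: the formatter's million-times-faster deterministic checks still gate the merge, and the model only gets to kibitz.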
