Hacker News | wild_egg's comments

I've had my agents using tmux for these use cases for a couple years now. What does TUI-use offer on top?

I've barely been using it lately, mostly leaving it disabled. But the tmux-mcp is pretty solid. https://github.com/nickgnd/tmux-mcp

I wish I were keeping better track of them all, but there are a bunch of neat tmux-based multi-agent systems. Agent of Empires, for example, has a ton of code around reading session data out of the various terminal UIs. https://github.com/njbrake/agent-of-empires

Ideally, IMO, TUI apps would also expose accessibility APIs. The structured view those APIs provide feels like it would be nice to have, and it would mean an agent could use accessibility alone to drive both GUIs and TUIs. For example, the recent voxcode submission does this on macOS to understand which file is open and what the line numbers are. https://github.com/jensneuse/voxcode https://news.ycombinator.com/item?id=47688582


Incredibly, agent-of-empires has become my daily driver.

The Dumb Zone for Opus has always started at 80-100k tokens. The 1M-token window just made the dumb zone bigger. That's probably fine if the work isn't complicated, but I really never want an Opus session to go much beyond 100k.

Does that apply to subagents?

I keep hearing OpenClaw runs on pi?

EDIT: confused by downvotes. In this thread people are saying it runs on top of `claude -p` and others saying it's on pi.

The `claude -p` option is allowed per https://x.com/i/status/2040207998807908432 so I really don't understand how they're enforcing this.


It runs on pi, not claude -p

That's my understanding too, though I haven't checked it. Running claude -p would be horribly inefficient. I wouldn't be surprised if OpenClaw added some compatibility layer to brute-force prompts through claude -p as a workaround. This isn't the first time OpenClaw has been "banned" from Claude subscriptions.

Don't have a GPU, so I tried the CPU option and got 0.6 t/s on my old 2018 laptop using their llama.cpp fork.

Then found out they hadn't implemented AVX2 for their Q1_0_g128 CPU kernel. Added that and now I'm getting ~12 t/s, which isn't shabby for this old machine.

Cool model.


Are you getting anything besides gibberish out of it? I tried their recommended commandline and it's dog slow even though I built their llama.cpp fork with AVX2 enabled. This is what I get:

    $ ./build/bin/llama-cli -hf prism-ml/Bonsai-8B-gguf -p "Explain quantum computing in simple terms." -n 256 --temp 0.5 --top-p 0.85 --top-k 20 -ngl 99
    > Explain quantum computing in simple terms.

     \( ,

      None ( no for the. (,./. all.2... the                                                                                                                                ..... by/

EDIT: It runs fine in their Colab notebook. Looking at that, you have to run `git checkout prism` in the llama.cpp repo before you build. That instruction is missing if you go straight to their fork of llama.cpp. Works fine now.

UPDATE: I was using the llama.cpp CPU backend and was still getting gibberish. On Google Colab they're running with CUDA. I turned Claude loose on the problem and it found a bug in the llama.cpp CPU backend where a float was being converted to an int and basically truncated to 0. Now it runs fine locally with the CPU backend.

Mind sharing the fix as a patch? I would like to run it this way, too.


"Not shabby" is a big understatement.

Why so?

Because it's the opposite of shabby

What are the reasons?

Virtual memory doesn't matter at all. It's virtual. You can take 2TB of address space, use 5MB of it, and nothing on the system cares.

Have a read through everything that's needed for a full uninstall: https://gist.github.com/banteg/1a539b88b3c8945cd71e4b958f319...

Minimalist alternative with no hooks or dependencies for the curious: https://github.com/wedow/ticket


Ticket looks great, thanks!


I think we're a long way from that.

But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't. When you know the lower pieces, your mental model tells you when and where the higher level pieces are likely to break. Legit superpower.


> But with that said, those who learn the underlying mechanisms will always be able to solve more problems than the folks who don't

I would define that as being "seriously hamstrung"


seems similar to a couple of simonw's recent tools?

https://simonwillison.net/2026/Feb/10/showboat-and-rodney/


Simon's tools are really great. Showboat is more for static screenshots, though. ProofShot is the full session: recording, error capture, action timeline, PR upload. Different scope, I'd say.


Factories benefit from economies of scale that favour centralization.

I think smaller groups handling more complexity is on point. But that's because each group will build their own bespoke factory catered to their exact needs.

I fully expect a mass proliferation of custom programs rather than standardization on a common set that groans under the weight of being general enough to support every use case.

