Hacker News | uxcolumbo's comments

I don't know much about the factors that determine why one AI coding harness is better than others. Is it system prompts? Or just personal preference in terms of the UX and there isn't actually a better output between using CC or Pi?

So what makes Pi better than CC? Is it better than OpenCode?


My experience with harnesses is entirely about UX, personally. You could just use an LLM directly and pipe its output straight into your source files, but that would produce terrible results in practice. Harnesses / agents are just better versions of “curl https://llm.com > source.{py/js/cpp/etc}”, imo

Long term I’m bullish on an open source harness “winning” the foot race, in a similar way that Linux “won” over Windows and MacOS (that is, debatably)


There's a recentish YouTube talk where he introduces the concept and contrasts it against those.

My (oversimplified) summary: it's like vim versus an IDE. Good for tinkerers and obsessives who like small, sharp and customisable tools.


> vim versus an IDE

That's exactly how I describe it to some of my coworkers who are old enough to have used vim.

CC is a steaming pile of vibe-coded bloat. If that floats your boat, go knock yourself out.

Is it better than OpenCode? It's certainly much smaller and doesn't have a client-server architecture - already that is a big win.


You mention the client-server architecture of opencode. Is that a local server or is it calling home to opencode servers?

One of the ideas I like about opencode is the ability to prompt and such from a web browser. So I'm curious if that is the client-server architecture you are talking about, or if it's something else.

For reference, I used replit for some vibe-ish coding for a little bit and really liked that I could easily prompt and view output on my phone when hanging out away from my computer. Or while waiting at the airport for example.

(RIP OG replit by the way. They've pretty much completely pivoted from a REPL to AI, which is pretty hilarious to me given the company name xD)


> One of the ideas I like about opencode is the ability to prompt and such from a web browser. So I'm curious if that is the client-server architecture you are talking about, or if it's something else.

Yes, this is what I meant. And yes it's ok that you like this about opencode :)

> For reference, I used replit for some vibe-ish coding for a little bit and really liked that I could easily prompt and view output on my phone when hanging out away from my computer. Or while waiting at the airport for example.

I use Google Jules and also appreciate being able to nudge it forward when not at the computer. In general I often appreciate when things run on other people's machines. However, if I'm to run a thing on my machines, it better be minimalist!


OpenCode is pretty terrible imo. Not very privacy minded.

I've been considering playing around with opencode. Can you expand on what you mean by this?

The harness itself is okay, but the defaults in the paid parts of the tool make really bad privacy decisions. They offer Zen, a pay-as-you-use credit system with access to the models they think work best with the harness. Their own stealth model in it, along with a number of the leading new models, has always-on data sharing for training purposes. They don't make this immediately obvious either; you have to click through links on their website to see the breakdown.

I am not usually super privacy-minded, but if you've already made it non-obvious that this is happening, I don't really trust the underlying tool.

https://opencode.ai/docs/zen/#privacy

Above is the link. The front page says your privacy is important and that they don't train on your data except for the following exceptions, which links to this page. Yet even their own model trains on your data, and there is no opt-out. So if you pay for Zen and select one of these models in the tool, you have no idea it's training on your data by default.


Why do you say it's intentional?

That was what the link indicated.

The user sent in a help ticket, and Bunny confirmed the response rewrite was intentional and that they would not fix it.

I wanted to get this out, not to conjecture as to why.


Your blog is a treasure trove - thanks for sharing.

Do you still cut your own hair ;) ?

But yes, us folks in the creative world can learn a few things from the corporate world when it comes to contracts and payment schedules. Mike Monteiro's talk 'F*ck you, pay me' comes to mind.

---

https://www.mikemonteiro.com/

https://creativemornings.com/talks/mike-monteiro--2/1


Thanks - still cut my own hair but rethinking after a particularly disastrous one recently!

How about: LLMs are on a spectrum and this one is on the tiny side?

R.I.P Ben. He was such a positive human being and encouraging you to do great things, even if you doubted yourself.

Here is a little clip of him from Bedroom to Billions: https://www.youtube.com/watch?v=aRsLOUYL3mk


Or get a Canon 5D Mark I or Mark II. Not many features, but great kit, and it can be bought for less than $500.

Sounds interesting, would like to learn more about this.

How do you implement the scoring layer, and when and how is it invoked?


The scoring layer sits between ingestion and storage. Incoming items get evaluated on a few axes: source reliability (did the agent observe this directly or was it told?), semantic distance from existing memories, and recency weighting for time-sensitive facts.

Contradiction detection runs as a separate step - we embed the incoming memory, similarity-search against existing ones, and score the pair for logical consistency. If it trips a threshold, it gets stored with a conflict flag and a link to the contradicting memory rather than silently overwriting.

The agent sees both during retrieval and reasons about which to trust in context. Sounds like overhead but it's fast — the scoring is a simple feedforward pass, not another LLM call.
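A minimal sketch of what such a scoring layer might look like, based only on the description above. All names, weights, and thresholds here are illustrative assumptions, not the commenter's actual implementation:

```python
import math
import time

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def score_memory(item, existing, now=None):
    """Score an incoming memory on the three axes described:
    source reliability, semantic distance, and recency.
    Weights and decay rate are assumptions for illustration."""
    now = now or time.time()
    reliability = 1.0 if item["observed_directly"] else 0.5
    # Semantic distance: 1 minus max similarity to anything stored.
    sims = [cosine(item["embedding"], m["embedding"]) for m in existing]
    novelty = 1.0 - max(sims, default=0.0)
    # Recency: exponential decay with an assumed one-day half-life.
    age_days = (now - item["timestamp"]) / 86400
    recency = 0.5 ** age_days
    return 0.5 * reliability + 0.3 * novelty + 0.2 * recency

CONFLICT_THRESHOLD = 0.85  # similarity above which we check consistency

def store(item, existing, contradicts):
    """Contradiction detection as a separate step: similarity-search
    against existing memories and, if a pair scores as inconsistent,
    store with a conflict flag and link rather than overwriting."""
    for m in existing:
        if cosine(item["embedding"], m["embedding"]) >= CONFLICT_THRESHOLD:
            if contradicts(item["text"], m["text"]):  # consistency scorer
                item["conflict_with"] = m["id"]       # link, don't overwrite
                break
    existing.append(item)
    return item
```

At retrieval time the agent would see both the flagged memory and the one it links to, which matches the "reason about which to trust in context" step; the `contradicts` callback stands in for the fast feedforward consistency scorer mentioned above.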


Thanks for that. I'm new to the applied AI / ML world.

What's your stack and infra setup? Mainly Python, AWS, Databricks?



This article seems to be by a company who wants to sell you this agent coordination layer.

First question: how do they define AGI? Couldn't see it.

Feels like clickbait tbh.


Which ones would you recommend?


I would love to tell you what it's called but the label is in Korean and I don't understand it.

I will find out for you.


Is advertising not that effective in general, or just for certain channels, i.e. TV?

Also, what are you exiting to?


So my understanding - from a friend at WPP who told me the same and from a Freakonomics episode - is that advertising was wildly oversold before digital.

When the metrics arrived with digital, they saw that advertising was just not as effective as they'd hoped; in some ways the ROI wasn't there. Seth Godin agrees. He says that advertising in the digital era could be as simple as just having a good product. I think this is Tesla's position on it: make the best product and the internet takes care of the rest.

Legacy companies have kept large ad budgets, but those are diminishing. My friend at WPP said their data science team showed that, outside of a new product or one consumers don't yet recognise, the actual outcomes from ads are marginal or incremental. That's what he told me. If your product is already known to consumers, the ROI is questionable.


Advertising's foremost job is to sell the premise of advertising to business management. Selling the business's product is always secondary to that.


Always felt suspicious to me that so much of company dynamics is basically about selling yourself to management... and there's one team in the company whose full-time job is selling? Wonder how that will turn out.


None of my coworkers could figure out why I was laid off, and they were shocked because I was important to getting the work done, but management made it clear I hadn't been selling myself to them.


My exit is storytelling. I think that’s the only thing that will remain. I suspect humans will still want to hear stories about and from other humans.

There’s something about AIs that feels wrong for storytelling. I just don’t think people will want AIs to tell them stories. And if they do… Well, I believe in human storytelling.

