Hacker News: dbreunig's comments

Yeah, I agree. Almost mentioned in the post how I imagine an ad PM at OpenAI is jealous of an ad PM at Perplexity.


I also dislike the term. It feels concocted to evoke “tacticool” vibes.

Unless you’re pushing new firmware onto a drone in Ukraine, FDE is stolen valor.


Might I interest you in "In the trenches" and "war stories"?


Ehh, I don’t think folks are claiming to be active duty or former military personnel, which is the bar for stolen valor accusations in my book. I agree with the sentiment, but not with finding fault here; folks hired for a specific role rarely pick their own job titles.


You should read the post. You might find the “domain” discussion interesting.


That's what I was alluding to: I don't think it defines AI, do you? These pieces seem like classical ML plus an LLM to me. Is that AI? From a technical standpoint, is it clearly defined?


It’s not clearly defined. Nowadays by default it means generative AI (https://en.wikipedia.org/wiki/Generative_artificial_intellig...).


AI is defined by algorithmic decision making. ML, a subset, is about using pattern matching with statistical uncertainty in that decision making. GenAI uses algorithms from classical ML, including deep learning based on neural networks, to encode and decode input to output, with the input jargonized as a prompt. Whether diffusion or next-token prediction, the patterns are learned during ML training.

AI is not totally encapsulated by ML. For example, reinforcement learning is often considered distinct in some AI ontologies. Decision rules and similar methods from the 1970s and 1980s are also included, though they highlight the algorithmic approach rather than the ML side.

There are certainly many terms used and misused by current marketing (especially the bitcoin bro grifters who saw AI as an out of a bad set of assets), but there actually is clarity to the terms if one considers their origins.


"AI is not totally encapsulated by ML": that's the part I haven't been able to put my finger on. I understand that it's not encapsulated; ML is not intelligence, it's gradient descent. So what is in that set AI - {ML}?


It's a fun rabbit hole.

Classical ML tasks (e.g. classification, regression), perception (vision, speech) and pattern recognition, generative AI capabilities (text, image, audio generation), knowledge representation and reasoning (symbolic AI, logic), decision-making and planning (including reinforcement learning for sequential decisions), as well as hybrid approaches (e.g. neuro-symbolic methods, fuzzy logic).

The capability areas outside classical ML have now been overlapped to a degree by GPT architectures and deep learning, but those architectures aren't the whole game.
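The non-ML slice of that list can be made concrete with a toy forward-chaining rule engine, the kind of symbolic, decision-rule AI the 1970s/1980s systems used. This is a minimal sketch with made-up facts and rule names, not any particular historical system:

```python
# Toy forward-chaining rule engine: decisions come from hand-written
# condition->conclusion rules, with no training data or gradient descent.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts):
    """Repeatedly fire rules until no new conclusions can be added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

Everything here is "algorithmic decision making" in the sense above, yet nothing is learned from data, which is why such systems sit in AI but outside ML.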


Yea, I think it's one of those things that I won't understand from the outside looking in. I'm in semiconductor software, so I do a lot of classical numerical methods, graph theory, and ML research, like converting obscure, math-heavy ML algorithms from academia for our ML teams. I don't think I'll get the technical side of what is now called AI without OJT in it.


I will be thinking about this comment for a bit. Thanks for this perspective!


The team at Chroma is currently looking into this and should have some figures.


I’m wondering that too. I think better routers will allow for more efficiency (a good thing!) at the cost of giving up control.

I think OpenAI attempted to mitigate this shift with the modes and tones they introduced, but there’s always going to be a slice that’s unaddressed. (For example, I’d still use dalle 2 if I could.)


I’m aware of LoRA, Civitai, etc. I don’t think they are “widely known” beyond AI imagery enthusiasts.

Krea wrote a great post, trained the opinions in during post-training (not during LoRA), and I’ve been noticing larger labs doing similar things without discussing it (the default ChatGPT comic strip is one example). So I figured I’d write it up for a more general audience and ask if this is the direction we’ll go for qualitative tasks beyond imagery.

Plus, fine-tuning is called out in the post.


I don't think there is such a thing as a general audience for AI imagery discussion yet, only enthusiasts. The closest thing might be the subset of folks who saw ChatGPT can make an anime version of their photo and tried it out, or the large number of folks who have heard the artists' pushback about the tools in general but haven't actually used them. They have no clue about any of the nuances discussed in the article, though.


AI imagery users are all enthusiasts, there aren't yet casual users in a "wide" general capacity.


Came here to say this: people present MCP’s verbosity as all the context the LLM needs. But almost always, this isn’t the case.

I wrote recently, “Connecting your model to random MCPs and then giving it a task is like giving someone a drill and teaching them how it works, then asking them to fix your sink. Is the drill relevant in this scenario? If it’s not, why was it given to me? It’s a classic case of context confusion.”

https://www.dbreunig.com/2025/07/30/how-kimi-was-post-traine...


I really like Relay.app for non-coders. People can get wrapped around the axle with n8n and co.


Wrote about this about a month ago. I think it’s fascinating how they developed these prompts: https://www.dbreunig.com/2025/07/05/cat-facts-cause-context-...


A similar, fun case is where researchers inserted facts about the user (gender, age, sports fandom) and found alignment rules were inconsistently applied: https://www.dbreunig.com/2025/05/21/chatgpt-heard-about-eagl...


If you map LLMs/LRMs to Norvig's model-based reflex agents, wouldn't this be expected behavior?
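For readers unfamiliar with the taxonomy: a model-based reflex agent (in the Russell & Norvig sense) keeps an internal model of the world and maps the current percept plus that state to an action via fixed condition-action rules. A rough sketch, with a hypothetical vacuum-style domain and all names illustrative:

```python
# Sketch of a model-based reflex agent: fixed condition-action rules
# plus an internal state that tracks what recent percepts implied.
class ModelBasedReflexAgent:
    def __init__(self):
        # Internal model: remembers aspects of the world beyond the
        # current percept (here, whether dirt was seen last step).
        self.state = {"last_seen_dirty": False}

    def act(self, percept):
        # Rules consult both the percept and the internal model.
        if percept == "dirty":
            action = "clean"
        elif self.state["last_seen_dirty"]:
            action = "inspect"
        else:
            action = "move"
        # Update the model after choosing, so the next step can use it.
        self.state["last_seen_dirty"] = percept == "dirty"
        return action

agent = ModelBasedReflexAgent()
print(agent.act("dirty"))  # rule fires on the current percept -> "clean"
print(agent.act("clean"))  # internal state from last step -> "inspect"
```

The analogy in the question would be that the rules are baked in (weights frozen at inference) and the "model" is whatever state the context window carries forward.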

