
I wonder if the deeper issue isn’t just “AI is too agreeable”, but that most advice (AI or human) doesn’t actually translate into action. A lot of people aren’t really looking for accurate feedback; they’re looking for something that feels coherent enough to sit with. Reddit gives extreme answers, AI gives agreeable ones, but in both cases the outcome is often the same: no real change in behavior. That might be why this feels worse with AI: it removes the friction you’d normally get from another human pushing back.


I think the article works better as a mental model than a literal claim. “Linux is an interpreter” feels wrong if you define interpretation strictly at the CPU instruction level, but it becomes more reasonable if you look at the kernel as something that interprets executable formats and environments (ELF, scripts with shebangs, initramfs, etc.). In that sense it’s less about instruction-by-instruction interpretation and more about orchestrating how different representations of programs become runnable. Maybe the confusion here is mixing those two meanings of “interpreter”.


This feels like one of those “because we can” projects that accidentally reveals where the platform is going.

CSS started as purely declarative styling, but between things like conditionals, math functions, and now these rendering tricks, it’s slowly creeping into “programmable system” territory. Not because it’s the right tool for it, but because browsers are becoming the real runtime. The interesting question isn’t “can Doom run in CSS”, it’s how much logic we’ll keep pushing into layers that were never meant to handle it.


My first thought along these lines was "do I now need a NoCSS plugin along with NoScript?"

At what point does CSS become powerful enough to be a malware vector?


The question is really about where the boundary between presentation (CSS) and interactivity (JavaScript) lies.

For static content like documents, the distinction is easy to draw. For applications, widgets, and other interactive elements, the line starts to blur.

Before things like flex layout, positioning content at 100% height was hard, so JavaScript ended up being used for layout and positioning.
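The full-height layout described above is now a few lines of declarative CSS; a minimal sketch (class names are just illustrative):

```css
/* Full-viewport-height page that once needed JS to size:
   a fixed header plus a content area that fills the rest. */
html, body {
  height: 100%;
  margin: 0;
}
body {
  display: flex;
  flex-direction: column;
}
main {
  flex: 1; /* take all remaining height, no script measurement needed */
}
```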

Positioning a dropdown menu, tooltip, or other content used to require JavaScript. Now you can specify the anchor position of the element via CSS properties. Determining which anchor position to use also required JavaScript, but with things like if() it can now be done directly in CSS.
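A minimal sketch of the CSS-only approach, assuming a browser that supports the CSS anchor positioning properties (the selectors here are illustrative):

```css
/* The trigger element declares an anchor name */
.menu-button {
  anchor-name: --menu;
}

/* The dropdown attaches to it purely in CSS, no JS geometry code */
.dropdown {
  position: absolute;
  position-anchor: --menu;
  top: anchor(bottom);  /* open just below the button */
  left: anchor(left);
  position-try-fallbacks: flip-block; /* flip above if there is no room below */
}
```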

Implementing disclosure elements used to require a mix of JavaScript and CSS. Now you can use the details/summary elements and CSS to style the open/close states.
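For example, a disclosure widget needs no script at all: the markup is `<details><summary>…</summary>…</details>`, and the open state is styled via the `[open]` attribute:

```css
/* Style a native <details>/<summary> disclosure widget */
details > summary {
  cursor: pointer;
  list-style: none; /* hide the default marker */
}
details > summary::before {
  content: "▸ ";
}
details[open] > summary::before {
  content: "▾ "; /* swap the marker when the element is open */
}
```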

Animation effects, such as easing a colour transition when opening an element or on hover, can easily be done in CSS now. Plus, with the reduced-motion media query you can gate those effects on that user preference entirely in CSS.
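A sketch of that pattern (class name illustrative): the transition is only enabled for users who haven't asked for reduced motion, so the colour change still happens, just without the animation:

```css
.button {
  background: steelblue;
}
.button:hover {
  background: navy;
}

/* Only animate when the user allows motion */
@media (prefers-reduced-motion: no-preference) {
  .button {
    transition: background 200ms ease-in;
  }
}
```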


The design of CSS has always been weak IMO. What we needed were general, simple primitives that can describe layout relationships and a compositional layer that includes some common defaults.


It's abstraction inversion at its finest. Declarative styling sounded like a good idea, but it jumped the shark long ago. It's begging to become a real programming language but for some reason the design ethos behind CSS seems to have been avoiding programming at all costs--maybe that's to keep the browser renderer in control (and hopefully responsive), maybe it's because they didn't want designers to have to learn to program, maybe they just hate JS. Whatever the reason, it's clear that CSS took a wrong turn and mutated into absolutely the wrong abstraction.


LLM generated comment


This is a really interesting setup, especially the split between the public and private agents. Curious about the IRC choice: was that mainly for simplicity and reliability, or did you find advantages over something like a lightweight HTTP/WebSocket layer? Also, how are you handling state between the two agents? Is it mostly stateless requests over A2A, or do you maintain some shared context?


