I see what Martin is saying here, but you could make that argument for moving up the abstraction layers at any point. Assembly to Python creates a lot of Intent & Cognitive debt by his definition, because you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it.

My counter is that technical intent, in the way he is describing it, only exists because we needed to translate human intent into machine language. You can still think deeply about problems without needing to formulate them as domain-driven abstractions in code. You could mind-map it, or journal about it, or put Post-it notes all over the wall. Creating object-oriented abstractions isn't magic.


Translating your intent into a formal language is a tool of thought in itself. It’s by that process that you uncover the ambiguities, the aspects and details you didn’t consider, maybe even that the approach as a whole has to be reconsidered. While writing in natural language can also be a tool of thought, there is an essential element in aligning one’s thought process with a formal language that doesn’t allow for any vagueness or ambiguity.

It’s similar to how doing math in natural language without math notation is cumbersome and error-prone.
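
To make that concrete with an example of my own: try stating the quadratic formula purely in prose ("negate the linear coefficient, add or subtract the square root of its square minus four times the product of the other two coefficients, then divide by twice the leading coefficient") versus in notation:

    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}

The notation is not just shorter; it forces every operand and every grouping to be explicit.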


Agree: house architects have their own language (architectural plans) to translate people's needs into unambiguous information that will be useful to those who build the house. Musicians use musical notation, chemists use diagrams to represent molecules, etc. And programmers use programming languages; when we write a line of code we don't hope that the compiler will understand what we write. Musical notation is a kind of abstraction: higher level than audio frequencies but lower level than natural language. Same for programming languages. Getting rid of all the formal languages would take us back 2,000 years.

Using a formal language also helps you enter a kind of flow. And then details you did not think about before using the formal language may appear. Not everything can be prompted, just as Alex Honnold prepared his climb of El Capitan very carefully, but it was only when he was on the rock that he made the real decisions. Same for Lindbergh when he crossed the Atlantic. The map is not the territory.


I agree, but that formal language doesn't need to be executable code.

So you need to find something better. In the article "How NASA writes 'perfect' software" (1996, fastcompany.com, discussed on HN), the author explains that adding GPS support required 1,500 pages of spec, and that to avoid ambiguity the spec used pseudocode to describe expected features and behaviors.

If you invent a formal language that is easy to read and easy to write, it may look like Python... Then someone will probably write an interpreter.
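
As a toy illustration of that point (my own, with made-up thresholds, nothing to do with the actual NASA spec): pseudocode written carefully enough to be unambiguous tends to already be valid Python.

    # Hypothetical spec-style pseudocode for "a GPS fix is usable only if
    # it has at least 4 satellites and dilution of precision at most 6".
    # The thresholds are illustrative. Written precisely enough to be
    # unambiguous, it is already executable Python.
    def fix_is_usable(satellite_count: int, dilution_of_precision: float) -> bool:
        if satellite_count < 4:
            return False
        if dilution_of_precision > 6.0:
            return False
        return True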

We have many languages, and senior people who know how to use them, who enjoy coding, and who don't have a "lack of productivity" problem. I don't feel the need to throw away everything we have to embrace what is supposed to be "the future". And since we need good devs to read and review LLM-generated code, how do you remain a good dev if you don't write code anymore? What's the point of being up to date in language X if we don't write code? Remaining good at something without doing it is a mystery to me.


Not Python; it looked like Standard ML, and the interpreter is quite good.

A formal language is executable. It might need some translation pass to be eventually executable on a particular system, but it is executable nevertheless.

"A sufficiently detailed specification is code"


> you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it

If you are thinking through deterministic code, you are thinking through the manipulation of bits in hardware. You are just doing it in a language which is easier for humans to understand.

There is a direct mapping of intent.


> If you are thinking through deterministic code, you are thinking through the manipulation of bits in hardware.

No I'm not. If I want the machine to evaluate 2+2, I don't know or care what bits in hardware it uses to do that (as long as it doesn't run out of memory), I just want the result to come back as 4.


When you press the 2 button, the plus button, the 2 button and the equals button, you are translating your question into bits and operations which are logically guaranteed to yield bits that represent your answer.

When you think through what will happen as a result of deterministic code, you are also thinking through what the bits will do, albeit at a higher level of abstraction.

When you ask an LLM to do something, you have no guarantee that the intent you provide is accurately translated, and you have no guarantee you’ll get the result you want. If you want your answer to 2+2 to always be 4, you shouldn’t use a nondeterministic LLM. To get that guarantee, the bit manipulation a machine does needs to be logically equivalent to the way you evaluate the question.

That doesn’t mean you can’t minimize intent distortion or cognitive debt while using LLMs, or that you can’t think through the logic of whatever problem you’re dealing with in the same structured way a formal language forces you to while using them. But one of my pet peeves is comparing LLMs to compilers. The nondeterminism of LLMs and lack of logical rigidity makes them fundamentally different.
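
A toy sketch of the contract difference (ask_llm is a hypothetical stand-in for any chat-completion API; the random choice just models sampling):

    import random

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in for an LLM API call; the random choice
        # models sampling-induced nondeterminism.
        return random.choice(["4", "4", "The answer is 4."])

    # Deterministic evaluation: same input, same output, by construction.
    assert 2 + 2 == 4

    # Sampling: ten identical prompts can yield more than one distinct
    # answer, and nothing in the contract prevents that.
    answers = {ask_llm("What is 2+2? Reply with just the number.") for _ in range(10)}
    print(answers)  # e.g. {'4', 'The answer is 4.'}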


> When you press the 2 button, the plus button, the 2 button and the equals button, you are translating your question into bits

I am not. 2 is 2, + is +, I don't care whether the machine chooses to represent them as ASCII or EBCDIC or semaphore flags.

> If you want your answer to 2+2 to always be 4, you shouldn’t use a non deterministic LLM.

Right. Determinism and locality are important. But bit-banging isn't (as long as the higher level abstractions are good).


Most programmers who write reasonably deterministic code don't even know how many bits it allocates.

The thing is that you paid that debt once. The mappings are well defined and deterministic.

The whole purpose of an abstraction is to not have to look underneath it to make sure what you did with the abstraction is still correct. You can be sure because you, or someone you trust, did the work of paying that debt once.

With LLMs you always need to verify the output, for every generation you need to pay that debt. So it is not an abstraction.


Exactly, well said.

"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." -- Edsger Dijkstra


> Assembly to Python creates a lot of Intent & Cognitive debt by his definition, because you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it

I agree! You often see this realized when projects slowly migrate to using more and more ctypes code to try and back out of that pit.

In a previous job, a project was spun up using Python because it was easier and the performance requirements weren't understood at that time. A year or two later it had become a bottleneck for tapeout, and when it was rewritten most of the abstract architecture was thrown out with it, since it was all Pythonic in a way that required a different approach in C++.


AI is not an abstraction layer.

> you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it

The interpreter is deterministic but LLMs aren't.


I like the word intent, but Martin Fowler’s essay made me think more carefully about it. When Thomas Kuhn talked about paradigm shifts, “paradigm” ended up carrying more than twenty different meanings. In the same way, I think intent has recently become one of the most polluted and overused words in programming. My own toy language project uses the word intent, so I am not really in a position to criticize others too harshly.

Reading the Hacker News comments, I kept thinking that programming is fundamentally about building mental models, and that the market, in the end, buys my mental model.

If we start from human intent, the chain might look something like this:

human intent -> problem model -> abstraction -> language expression -> compilation -> change in hardware

But abstraction and language expression are themselves subdivided into many layers. How many of those layers a programmer can afford not to know has a direct effect on that programmer’s position in the market. People often think of abstraction as something clean, but in reality it is incomplete and contextual. In theory it is always clean; in practice it is always breaking down.

Depending on which layer you live in, even when using the same programming language, the form of expression can become radically different. From that point of view, people casually bundle everything together and call it “abstraction” or “intent,” but in reality there is a gap between intent and abstraction, and another gap between abstraction and language expression. Those subtle friction points are not fully reducible.

Seen from that perspective, even if you write a very clear specification, there will always be something that does not reduce neatly. And perhaps the real difference between LLMs and humans lies in how they deal with that residue.

Martin frames the issue in a way that suggests LLM abstractions are bad, but I do not fully agree. As someone from a third-world country in Asia, I have seen a great deal of bad abstraction written in my own language and environment. In that sense, I often feel that LLM-generated code is actually much better than the average abstractions produced by my Asian peers. At the same time, when I look at really good programming from strong Western engineers, I find myself asking again what a good abstraction actually is.

The essay talks about TDD and other methodologies, but personally I think TDD can become one of the worst methodologies when the abstraction itself is broken. If the abstraction is wrong, do the tests really mean anything? I have seen plenty of cases where people kept chasing green tests while gradually destroying the architecture. I have seen this especially in systems involving databases.

The biggest problem with methodology is that it always tends to become dogma, as if it were something that must be obeyed. SOLID principles, for example, do not always need to be followed, but in some organizations they become almost religious doctrine. In UI component design, enforcing LSP too rigidly can actually damage the diversity and flexibility of the UI. In the end, perhaps what we call intent is really the ability to remain flexible in context and search for the best possible solution within that context.

From that angle, intent begins to look a lot like the reward-function-based learning of LLMs.


You are right in that the code (or the formal model) alone isn’t sufficient, in that it doesn’t specify the context, requirements, design goals and design constraints. The formal and the informal level complement each other. But that’s also why it’s necessary to think at both levels when developing software. Withdrawing to just the informal level and letting LLMs handle the mapping to the formal level autonomously doesn’t work.

That being said, even model-based design (MBD) has largely been a failure, despite it being about mapping formal models to (formal-language) program code.


Architecture is about the choices you will regret in the future if you get them wrong today. You will regret not having testable code, so TDD isn't bad - but that is not the whole story, and there are many things you will regret that TDD won't help with.

There is the famous bowling game TDD example where the result doesn't have a frame object, and they argue they proved you don't need one. That is wrong, though: the example took just a couple of hours, and there is nothing in a two-hour program bad enough to regret. If you were doing a real bowling system with pinsetters, support for 50 lanes, and a bunch of other things that I, who don't work in that area, don't even know about, you would find places to regret things.


In Tidy First?, Kent Beck explains that the main tradeoff is between what we can get now and what we will be able to do later. A hacky decision can keep the company afloat, but can reduce velocity to a snail's pace in the future.

It’s easier to keep the balance by keeping everything simple and maintaining good hygiene in your codebase.


I don't deny that TDD is generally useful. I like it as well.

What I meant is that, like any powerful tool, there are situations where it shouldn't be used.

Thanks for the thoughtful comment.


I've tried a bunch of these "memory" systems and they aren't there yet.

There is, currently, a tension between memory and context pressure on the coding agent. Multiple studies have now shown that tools like Codex and Claude Code get worse at coding as their context window fills up.

As you start adding skills, memory systems, plugins and then a large code base on top of it, I've personally seen the agent start to flounder pretty quickly.

We need a way for agents to pull ad hoc memory as they go along, in the same way we do, rather than trying to front-load all the context they might need.
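
Roughly this shape, as a loose sketch (all names hypothetical; no agent framework exposes exactly this):

    from dataclasses import dataclass

    @dataclass
    class Memory:
        topic: str
        content: str

    # Hypothetical store built up across sessions.
    MEMORY_STORE = [
        Memory("auth", "We use JWTs; refresh tokens live in Redis."),
        Memory("testing", "Integration tests need the docker-compose stack up."),
    ]

    def recall(query: str, limit: int = 3) -> list[str]:
        # Exposed to the agent as a tool it calls mid-task, so the base
        # context stays small instead of front-loading every memory.
        hits = [m.content for m in MEMORY_STORE if query.lower() in m.topic.lower()]
        return hits[:limit]

    print(recall("auth"))  # pulled on demand, not preloaded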


Hi, I work for CodeRabbit.

We have been building agents for code review workflows for nearly two years. Our Code Review Knowledge Base today serves more than 3 million repos. We pack more than 40 different points of information into the same LLM context, such as MCP servers, rule files, etc. We understand context poisoning/rot and how it creates problems.

We are now bringing the same learning/context engine to SDLC and Slack.

Please give it a try!


I find a lot of these IDEs are simply not as useful as a CLI. When I'm running a full agentic workflow, I don't really need to see the contents of the files at all times; I'd actually say I often don't need to at all, because I can't really understand 10k lines of code per hour.


What role do you play in creating software? If you don't need to see any code, should your employer consider cutting your position? I'm very much pro-humans-in-the-workforce, but I can't understand how someone could be ok with doing so little at their job.


There is a large and growing segment of executives in the software world that is pushing this model hard, like betting their career on it. To them the “dark factory” is an inevitability. As a consequence, not only are developers choosing this path, but the companies they work for are in varying degrees selecting this path for them.


Most, if not all, of them are shooting themselves in the foot. I've been saying this for a long time. The only thing LLMs are actually useful for is automating labor and reducing the amount a worker can demand for their work. Don't fall for this trap.


[flagged]


Then do you think this new data entry position is going to be as well paying as your current one?


Better paying, way higher leverage.


On the face of it, "10k lines of code per hour" sounds like a ridiculous metric to the point of parody.


Saying how many lines of code you can write this way is also a bit like bragging that you are building the world's heaviest airplane.


If you can’t understand your code, who can?


It’s not their code, and it’s not for them to understand. The endgame here is that code as we know it today is the “ASM” of tomorrow. The programming language of tomorrow is natural human-spoken language used carefully and methodically to articulate what the agent should build. At least this is the world we appear to be heading toward… quickly.


But the endgame is not here and likely never will be, because unlike ASM, LLMs are not deterministic. So what happens when you need to find the bug in the 100k LoC you generated in a few weeks and have never read, and the agent can't help you? And it happens a lot. I am not doing this myself so I can't comment firsthand, but I've heard many vibe coders say that a lot of their commits are about fixing the slop they output a week prior.

Personally, I keep trying OpenCode + Opus 4.6 and I don't find it that good. I mean, it does an OK job, but the code is definitely lower quality, and at the moment I care too much about my codebase to let it grow into slop.


So most messaging apps rely on a phone number or a centralized server to provide a means of making at least the initial connection. In a purely P2P messaging system, how do I, as a user, find the other person I might want to talk to?


Third-party info exchange. I don't see that as an issue, though. For example, on Discord you also 100% exchanged usernames via a third-party system. I mean, even phone numbers are exchanged via third parties. Now that I think about it, the only places where you can search for people somewhat reliably are social media sites.


I'm a fan of "face to face mutual qr code key exchange." I should implement that someday.


QR code key exchange is convenient, but I did not plan phone support.


Nobody does "face to face" key exchange like I imagine. Just two phones facing each other spamming QR codes for the other to read.

What I REALLY want is an app that builds a big bank of nonces between you and your peers over short-range radio or QR codes and then lets you use a one-time pad.
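
The pad mechanics themselves are the easy part; the hard parts are exchanging enough material and never reusing it. A toy sketch, not a vetted implementation (no authentication, fixed-size bank):

    import secrets

    # Pre-shared bank of random bytes, built up in person via QR codes
    # or short-range radio; both peers hold identical copies.
    pad_bank = bytearray(secrets.token_bytes(4096))

    def otp_encrypt(message: bytes) -> bytes:
        if len(message) > len(pad_bank):
            raise ValueError("pad exhausted; exchange more material in person")
        pad = bytes(pad_bank[:len(message)])
        del pad_bank[:len(message)]  # never reuse pad bytes
        return bytes(m ^ p for m, p in zip(message, pad))

    # Decryption is the same XOR against the peer's copy of the pad,
    # consumed in lockstep.
    ciphertext = otp_encrypt(b"meet at noon")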

Ultimately, I'm only offering criticism because I have spent a lot of time working on exactly this problem, but I am not in a position to actually implement it. This is awesome and you should be proud of it.


I appreciate the criticism, really. In the current version users only have to exchange the username or peer ID via a third party and then find each other on the DHT...

However, there is also another way, already implemented (I am currently writing the how-to on my blog), and that is using "trusted users". Basically, instead of two users trying to find each other on the DHT, they can just export their profiles in the "Profile" section. That prompts them to create a shared secret and exports a ".kiyeovo" file. You send that file to the other party, they click the "+" in the sidebar header -> "import trusted user", select the ".kiyeovo" file, and voila!

I know it's not nearly as convenient as what you're describing, but it's just a more "trustable" way of creating a contact which is also not that inconvenient.
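
In case it helps to picture it, the shape is roughly this (heavily simplified sketch, not the actual ".kiyeovo" format; a real implementation would use a random salt):

    import base64, hashlib, json
    from cryptography.fernet import Fernet

    def _key_from_secret(shared_secret: str) -> bytes:
        # Derive a symmetric key from the shared secret both users agreed on.
        raw = hashlib.scrypt(shared_secret.encode(), salt=b"demo-salt",
                             n=2**14, r=8, p=1, dklen=32)
        return base64.urlsafe_b64encode(raw)

    def export_profile(profile: dict, shared_secret: str) -> bytes:
        return Fernet(_key_from_secret(shared_secret)).encrypt(
            json.dumps(profile).encode())

    def import_profile(blob: bytes, shared_secret: str) -> dict:
        return json.loads(Fernet(_key_from_secret(shared_secret)).decrypt(blob))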


I can't say I needed yet another reason to hate the current state of LinkedIn, but I am not surprised in the slightest.


Two quick questions/thoughts:

1) Reading the changes as a human is not easy. A document with a bunch of editing marks in it, especially at the rate AI can make edits to things, would be very hard to keep track of; at least that's how I feel.

2) Since you are tracking the context changes forever, as a project develops over time, wouldn't the size of the context grow with no upper bound? Would this cause memory pressure issues once we are talking about a huge codebase and a huge Changedown context?


This looks pretty cool, would love to try it.

One thing I'm trying to understand from the docs: I have hundreds of Playwright-based BDD tests in my projects, especially the ones that are purely AI-written. How does this interface with my existing tests? Does it scan the repo, or is it meant to have its own standalone folder?


It doesn't interact with your code at all (yet). The tests are in English, so it's mostly not designed for developers.

It's an interesting idea, but currently other apps are way better set up to scan code (like VS Code and Claude Code). What I have done is ask Claude to scan the codebase and generate a full series of English-language tests in a markdown file. It would be good to ingest that, but for now I'm just using cut and paste.


You could certainly build that today; Mario Tennis had AI-controlled virtual tennis players 20+ years ago. You would simply need to put two of them opposite each other and you could create this. It's just not very compelling to watch two video game NPCs do things.


I've seen a few bot vs bot videos of StarCraft II, like https://www.youtube.com/watch?v=uiW-r_OEvpA Anyway, I agree that the human drama makes the story more interesting.


Depends on how hard the question is.

Simple functions in small code bases will probably work.

Once you get large code bases and more complex work, you'd have issues with the small LLM having the context it needs to actually solve the issue.


That doesn't seem terribly surprising; a human can quickly look through a grid of shirts to find one they like. ChatGPT would be guessing what they might want, and the human would probably get a bad experience there with some regularity.


They have a solution and are trying to find a problem.

