> long contexts are still expensive and can also introduce additional noise (if there is a lot of irrelevant info)
I think spec-driven generation is the antithesis of chat-style coding for this reason. With tools like Claude Code, you are the one tracking what was already built, what interfaces exist, and why something was generated a certain way.
I built Ossature[1] around the opposite model. You write specs describing behavior, it audits them for gaps and contradictions before any code is written, then produces a build plan TOML where each task declares exactly which spec sections and upstream files it needs. The LLM never sees more than that, and there is no accumulated conversation history to drift from. Every prompt and response is saved to disk, so traceability is built in rather than something you reconstruct by scrolling back through a chat. I used it over the last couple of days to build a CHIP-8 emulator entirely from specs[2]. I have some more example projects on GitHub[3].
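To make that concrete, a task entry in the generated plan looks roughly like this (illustrative field names and values, not Ossature's exact schema):

    [[tasks]]
    id = "cpu-opcodes"
    spec_sections = ["cpu.smd#instruction-set"]
    upstream_files = ["src/memory.c"]
    verify = "make test-cpu"

The prompt for that task is assembled from just those declared inputs, nothing else.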
I've been thinking a lot about this lately. It seems like what is missing with most coding agents is a central source of truth. Before, the truth of what the company was building was distributed, and alignment came from people having context about what they did and what others did and were doing.
Now the coding agent starts fresh each time, and it's up to you to understand what you asked it and provide the feedback loop.
Instead of chat -> code, I think chat -> spec and then spec -> code is much more the future.
The spec -> code phase should be independent of any human. If the spec is unclear, ask the human to clarify the spec, then use the spec to generate the code.
What happens today is that something is unclear and there is a loop where the agent starts to uncover some broader understanding, but then it is lost in the next chat. And the human also doesn't learn why their request was unclear. "Memories" and agent files are all duct tape over this problem.
Exactly this. The audit pass in Ossature is specifically for that "unclear spec" case: you resolve ambiguities in the spec before generation starts rather than discovering them mid-conversation and losing them the next session. Once the plan is clean, the LLM never needs to ask a clarifying question. Memories and agent files are patching over the fact that intent was never properly captured to begin with.
Hey, you seem to have a similar view on this. I know ideas are cheap, but hear me out:
You talk with agent A, which only modifies the spec. You still chat and can say "make it prettier", but that agent only modifies the spec. The spec could also separate "explicit" from "inferred".
And of course agent B, which builds, only sees the spec.
The user can actually care about diffs generated by agent A again, because nobody wants to verify diffs on agent-generated code full of repetition and created by search and replace. I believe if somebody implements this right, it will be the way things are done.
And of course, with better models, the spec can be used to actually meaningfully improve the product.
Long story short, what the industry currently misses, and what you seem to understand, is that intent is sacred. It should always be stored, preferably verbatim and always with relevant context ("yes exactly" is obviously not enough). The current generation of LLMs can already handle all that. It would mean something like 2-3x cost, but it seems so much worth it (and in the long run the cost could likely go below 1x given typical workflows and repetition).
Right, the spec/build separation is exactly the idea, and Ossature is already built that way on the build side.
I agree a dedicated layer for intent capture makes a lot of sense. I thought about that as well; I am just not fully convinced it has to be conversational (or free-form conversational). Writing a prompt to get the right spec change is still a skill in itself, and it feels like it'd just be shifting the problem upstream rather than actually solving it. A structured editing experience over specs feels more tractable to me. But the explicit vs inferred distinction you mention is interesting and worth thinking through more.
It's just that we're lazy. After being able to chat, I don't see people going back. You can't just paste some error into the specs, and you can't paste an image and say "make it look more like this". Plus, however well designed the spec, something like "actually, make it always wait for user feedback" can trigger changes in many places (even just for the sake of removing contradictions).
1. You can write a spec that builds something that is not what you actually wanted.
2. You can write a spec that is incoherent with itself or with the external world.
3. You can write a spec that doesn't have sufficient mechanical sympathy with the tooling you have, so it requires you to spec out more and more of the surrounding tech than you practically can.
All of those issues can be addressed by iterating on the spec with the help of agents. It's just an engineering practice, one that we have to get better at understanding.
All three of these are real. The audit pass in Ossature is meant to catch the first two before generation starts: it reads across all specs and flags underspecified behavior, missing details, and contradictions. You resolve those, update the specs, and re-audit until the plan is clean. It's not perfect, but it shifts a lot of the discovery earlier in the process.
The third point is harder. You still need to know your tooling well enough to write a spec that works with it. That part hasn't gone away.
A program defines the exact computer instructions. Most of the time you don't care about that level of detail. You just have some intent and some constraints.
Say "I want HN client for mobile", "must notify me about comments", you see it and you add "should support dark mode". Can you see how that is much less than anything in any programming language?
My own approach also has intent sitting at the top: intent justifies plan, plan justifies code, code justifies tests. And the other way around: tests satisfy code, code satisfies plan, plan satisfies intent. These threads, bottom-up and top-down, are validated by judge agents.
I also make individual task md files (task.md), which makes them capable of carrying intent and plan. They're not just checkbox-driven "- [ ]" gates: they get annotated with outcomes and become a workbook after execution. The same task.md is seen twice by judge agents that run without extra context: the plan judge and the implementation judge.
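For a rough idea of the shape, a task.md after execution ends up looking something like this (illustrative, not the plugin's exact template):

    # task: add retry to feed fetcher

    ## intent
    User asked for transient network errors to be retried instead of failing the sync.

    ## plan
    - [x] wrap fetch in a retry loop -> outcome: 3 attempts, exponential backoff
    - [x] add a test for the retry path -> outcome: test_fetch_retries added and passing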
I ran tests to see which component of my harness contributes the most, and it came out that it is the judges. Apparently Claude Code can solve a task just as well with or without a task file, but the existence of this task file makes plans and work more auditable, and not just for bugs, but for intent-following.
Coming back to user intent, I have a post-user-message hook that writes user messages to a project-scoped chat_log.md file, which means all user messages are preserved (user text << agent text, so it is efficient). When we start a new task, the chat log is checked to see if intent was properly captured. I also use it to recover context across sessions and remember what we did last.
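If someone wants to wire up something similar themselves, a minimal sketch using Claude Code's UserPromptSubmit hook in .claude/settings.json could look like this (config shape from memory, so double-check the hooks docs; it assumes the hook's stdin JSON carries the message in a .prompt field):

    {
      "hooks": {
        "UserPromptSubmit": [
          { "hooks": [
            { "type": "command",
              "command": "jq -r .prompt >> chat_log.md" }
          ] }
        ]
      }
    }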
Once every 10-20 tasks I run a retrospective task that inspects all task.md files since the last retro and judges how the harness performs and how the project is going. This can detect things not apparent in task-level work, for example when multiple tasks implement a more complex feature, or when a subsystem is touched by multiple tasks. I think reflection is the one place where the harness itself, and how we use it, can be refined.
    claude plugin marketplace add horiacristescu/claude-playbook-plugin
source at https://github.com/horiacristescu/claude-playbook-plugin/tree/main
The hierarchy you describe (intent -> plan -> code -> tests) maps well to how Ossature works. The difference is that your approach builds scaffolding around Claude Code to recover structure that chat naturally loses, whereas Ossature takes chat out of the generation pipeline entirely. Specs are the source of truth before anything is generated, so there's no drift to compensate for; the audit and build plan handle that upfront.
The judge finding is interesting though. Right now, verification during build for each task in Ossature is command-based: compile, tests, that kind of thing. A judge checking spec-to-code fidelity rather than (or maybe in addition to?) runtime correctness is worth thinking about.
Yes, judges should not just look for bugs; they should also validate intent-following, but that can only happen when intent was preserved. I chose to save the user messages as a compromise; they are probably 10 or 100x smaller than the full session. I think tasks themselves are one step lower than pure user intent. Anyway, if you didn't log user messages, you can still recover them from session files if they have not been removed.
One interesting data point: I compared the word count of my chat messages vs the final code and they came out about 1:1, but in reality a programmer would type 10x the final code during development. From a different perspective, I found I have created 10x more projects since I started relying on Claude and my harness than before. So it looks like user intent is 10x more effective than manual coding now.
I'm using something similar-ish that I built for myself (much smaller, less interesting, not yet published, and with prettier syntax). Something like:
    a -> b                  # b must always be true if a is true
    a <-> b                 # works both ways
    a => b                  # when a happens, b must happen
    a -> fail, a => fail    # a can never be true / can never happen
    a                       # a is always true
So you can write:
    Product.alcoholic? Product in Order.lineItems -> Order.customer.can_buy_alcohol?
    u1 = User(), u2 = User(), u1 in u2.friends -> u2 in u1.friends
    new Source() => new Subscription(user=Source.owner, source=Source)
    Source.subscriptions.count > 0    # delete otherwise
This is a much more compact way to write desired system properties than writing them out in English (or Allium), and it helps you reason better about what you actually want.
Allium looks interesting; making behavioral intent explicit in a structured format rather than prose is actually very close to what I'm trying to do with Ossature.
Ossature uses two markdown formats: SMD[1] for describing behavior and AMD for structure (components, file paths, data models). AMDs[2] link back to their parent SMD so behavior and structure stay connected. Both are meant to be written, reviewed, and/or owned by humans; the LLM only reads the relevant parts during generation. One thing I am thinking about for the future is making the template structure customizable per project, because "spec" means different things to different teams/projects. Right now the format is fixed, but I am thinking about a schema-based way to declare which sections are required, their order, and basic content constraints, so teams can adapt the spec structure to how they think about software without having to learn a grammar language to do it (though maybe PEG-based underneath anyway, not sure).
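As a sketch of what that schema could look like (purely hypothetical, nothing like this is implemented yet):

    [sections.overview]
    required = true
    order = 1

    [sections.behavior]
    required = true
    order = 2
    min_items = 1    # at least one behavior entry

    [sections.edge_cases]
    required = false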
The formal approach you describe is probably more precise for expressing system properties. Would be interesting to see how practical it is to maintain it as a project grows.
Totally agreed! I've had good success using Claude Code with Cucumber, where I start with the spec and have Claude iterate on the code. How does Ossature compare to that approach?
I like it a lot. I find the chat-driven workflow very tiring, and a lot of information gets lost in translation until LLMs just refuse to be useful.
How does the human intervention work out? Do you use a mix of spec and audit editing to get into the ready-to-generate state? How high is the success/error rate when you generate from tasks to code? Do LLMs forget/mess things up, or does it feel better?
The spec-driven approach is potentially better for writing things from scratch; do you have any plans for existing code?
> How does the human intervention work out? Do you use a mix of spec and audit editing to get into the ready-to-generate state?
Yes, the flow is: you write specs, then you validate them with `ossature validate`, which parses them and checks they are structurally sound (no LLM involved), then you run `ossature audit`, which flags gaps or contradictions in the content as INFO, WARNING, or ERROR level findings. The audit has its own fixer loop that auto-resolves ERROR level findings, but you can also run it interactively, manually fix things yourself, address the INFO and WARNING findings as you see fit, and rerun until you are happy. From that it produces a TOML build plan that you can read and edit directly before anything is generated. You can reorder tasks, add notes for the LLM, adjust verification commands, or skip steps entirely. So when you run `ossature build` to generate, the structure is already something you have signed off on. There are a few more details under the hood; I wrote more in an intro post[1] about Ossature that might be useful.
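In practice the loop is just:

    ossature validate   # structural check of the specs, no LLM involved
    ossature audit      # flags INFO/WARNING/ERROR findings; fix specs and rerun until clean
    # hand-edit the generated build plan if needed: reorder tasks, add notes, tweak verification
    ossature build      # generates code task by task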
> The spec-driven approach is potentially better for writing things from scratch; do you have any plans for existing code?
Right now it is best for greenfield, as you said. I have been thinking about a workflow where you generate specs from existing code and then let Ossature work from those, but I am honestly not sure that is the right model either. The harder case is when engineers want to touch both the code and the specs, and keeping those in sync through that back and forth is something I want to support but have not figured out a clean answer for yet. It's on the list; if you have any thoughts, please feel free to open an issue! I want to first get through some of the issues I am seeing with the spec-editing workflow itself (and re-audit/re-planning), specifically around how changes cascade through dependent tasks.
Regarding success rate: each task requires a verification command to run and pass after generation, and if it fails, a separate fixer agent tries to repair it using the error output. The number of retry attempts is configurable. I did notice that the more concise and clear the spec is, the more likely it is for capable models to generate code that works (obviously), but that's what auditing is supposed to help with. One interesting case with the CHIP-8 emulator I mentioned above is that even mentioning the correct name of the solution to a specific problem was not enough; I had to spell out the concrete algorithm in the spec (wrote more details here[2]). But the full prompt and response for every task is saved to disk, so when something does go wrong you can read the exact prompt/response, and the fix-attempt prompts/responses, for each task.
I’m building something similar. It’s not public yet because it’s still early and I’m still working on exactly what it is supposed to be.
But the idea is similar in that I start with a spec and feed the LLM context that is a projection of the code and spec, rather than a conversation. The context is specific to the workflow stage (e.g. planning needs different context than implementing) and it doesn't accumulate and grow (at least, the growth is limited and based on the tool call loop, not on the entire process).
My main goals are more focused context, no drift due to accumulated context, and code-driven workflows (the LLM doesn’t control the RPI workflow, my code does).
It’s built as a workflow engine so that it’s easy for me to experiment with and iterate on ideas.
I like your idea of using TOML as the artifact that flows between workflow stages; I will see if that's something that might be useful for me too!
Very much the same thinking. Ossature already structures work that way at the plan level during audit, so I'm curious to see where you take it. Happy to share more about the TOML approach if useful. Feel free to reach out (me at my domain).
I answered this exact question in a previous HN comment thread a few weeks ago. Maybe I should reconsider front-matter? My previous answer:
> Yeah, I did briefly consider front-matter, but ended up with inline @ tags because I thought it kept the entire document feeling like one coherent spec instead of header-data + body; front matter felt like config to me, but this is 0.0.1 so things might change :)
There are two problems with waterfall. First, if it takes too long to implement, the world moves on and your spec doesn't. Second, there are often gaps in the spec, and you don't discover them until you try to implement it and find that the spec doesn't specify enough.
Well, for the first problem, if an AI can generate the code in a day or a week, the world hasn't moved very much in that time. (In the future, if everything is moving at the speed of AI, that may no longer be true. For now it is.)
The second problem... if Ossature (or equivalent) warns you of gaps rather than just making stuff up, you could wind up with iterative development of the spec, with the backend code generation being the equivalent of a compiler pass. But at that point, I'm not sure it's fair to call it "waterfall". It's iterative development of the spec, but the spec is all there is - it's the "source code".
You framed it better than I would. The part I'm still working through is making re-planning feel cheap when specs change. Right now, if you change something early, downstream tasks get invalidated and the cascade isn't always obvious. Ideally, once the project is built and the specs change, none of the generated code should change if only an irrelevant part of the spec changed. That is a bit harder to do properly, but I have some ideas.
I agree, and this is what makes it not waterfall. You're iterating on the spec, not backtracking from broken code. The spec is the "source code"; re-planning and rebuilding is just "recompiling".
IMO the raw Claude CLI is great for one-off interactive sessions, but as soon as you want repeatable multi-step workflows you’re either copy-pasting prompts forever or hacking your own solution manually. That’s exactly the gap these tools fill.
My take on a solution for this is https://ossature.dev: .smd spec markdown files plus `ossature audit` / `ossature build`, which give you DAG orchestration, SHA-traced increments, and tiny focused contexts.
Yeah, bash scripts start clean, but the sprawl kicks in quickly as the workflow and project become more complex. Prompts get copied, dependencies turn manual, and maintenance of the workflow itself becomes a chore.
Ossature swaps that for structured SMDs and optional AMDs. Multiple specs build into a clean DAG that drops into an editable plan.toml, so everything stays traceable without the mess.
I use bash scripts. Both Claude and Vibe support all kinds of arguments if you need a prompt to “become a task”. Bash is also deterministic and easy to read and debug.
Yeah, I did briefly consider front-matter, but ended up with inline @ tags because I thought it kept the entire document feeling like one coherent spec instead of header-data + body; front matter felt like config to me, but this is 0.0.1 so things might change :)
Writing Swift is really fun! Last year I built a game and a macOS app (now working on another app) to get some proper hands-on experience, and I was very impressed. Wrote about it here[1].
I'd love for the experience outside of Xcode to get better (even on a Mac, developing for macOS/iOS), but I have to say that I didn't find Xcode to be _that_ awful. I've certainly used worse IDEs, which is partly why I dislike IDEs altogether.
I've personally never seen anything as bad as Xcode, but granted, I haven't used really old IDEs (I've always preferred plain editors). Last year I built a few small apps using both Xcode/Swift and Visual Studio/C#, and using the MS stack felt like living in the future, despite WinUI having a very dated UI syntax.
I'm working on an iOS app called Corefeed, a feed reader that mixes articles, podcasts, and YouTube channels into one timeline.
The initial idea was a feed reader where entries are not just sorted chronologically but also grouped into time buckets (last hour, today, last week, last month, etc.) so it's easy to show/hide entries in a bucket, mark entries in it as read/archived, and keep up with new posts even when subscribed to many feeds. I also wanted an excuse to play around with Foundation Models, so I added an optional AI Digest and Briefing generated from whatever is currently filtered.
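The bucketing logic itself is tiny; a minimal Swift sketch of the idea (hypothetical types, not Corefeed's actual code):

    import Foundation

    enum Bucket: String, CaseIterable {
        case lastHour = "Last hour", today = "Today"
        case lastWeek = "Last week", lastMonth = "Last month", older = "Older"
    }

    func bucket(for date: Date, now: Date = Date()) -> Bucket {
        let age = now.timeIntervalSince(date)
        if age < 3600 { return .lastHour }
        if Calendar.current.isDate(date, inSameDayAs: now) { return .today }
        if age < 7 * 86400 { return .lastWeek }
        if age < 30 * 86400 { return .lastMonth }
        return .older
    }

    // Grouping entries for display is then one line, e.g.:
    // let grouped = Dictionary(grouping: entries) { bucket(for: $0.publishedAt) }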
The idea evolved a bit as I worked on it though, and now it reads RSS, Atom, JSON feeds, podcast feeds (legacy and Podcasting 2.0), and YouTube channel feeds. It includes a podcast player, automatic tagging with Apple Intelligence when available, feed groups, and various other small touches. It's still very much in alpha and in need of testing and polish, so it probably won't hit the App Store anytime soon.
Thanks! I wasn't aware of rcmd, seems interesting!
> Will future updates cost?
Nope! Once you buy it, all future updates are included. I plan to keep maintaining it with bug fixes and small improvements based on user feedback, but my goal is to keep the app focused and lightweight rather than adding tons of features.
It's not an easy task, and when there are already lots of established practices, habits, and opinions, it becomes even more difficult to get around the various pain points. There have been many attempts: pip (the standard) is slow, lacks proper dependency resolution, and struggles with reproducible builds. Conda is heavy, slow to solve environments, and mixes Python with non-Python dependencies, which makes some setups very complicated to understand. Poetry improves dependency management but is sluggish and adds unnecessary complexity for simple scripts/projects. Pipenv makes things simpler, but has the same issues of slow resolution and inconsistent lock files. Those are the ones I've used over the years, at least.
uv addressed these flaws with speed, solid dependency resolution, and a simple interface that builds on what people are already used to. It unifies virtual environment and package management, supports reproducible builds, and integrates easily with modern workflows.
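For anyone who hasn't tried it, the core day-to-day commands (a small sample; the CLI covers much more):

    uv venv               # create a virtual environment
    uv add requests       # add a dependency to pyproject.toml
    uv lock               # resolve and write a lockfile
    uv sync               # install the environment from the lockfile
    uv run python app.py  # run inside the project environment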
Quality, security, and code that doesn't fall apart later matter. I don't want AI-slop children's books to be a thing. I hear you on AI making it easier for people to build stuff, but calling valid concerns "gatekeeping" is a bit off.
With that said, I really like this site and the approach!
If those things actually mattered, they would be rewarded in the "free marketplace of ideas". But they aren't and never have been. The most wealthy companies in the world aren't wealthy because of the quality of their code. That's why literally zero Very Large Organizations prioritize code quality. Marketing matters far more than quality, and the budgets demonstrate this. You're just holding it wrong.
You're missing the point. Some slice of the market ignoring quality doesn't make it unimportant. Those companies get burned by tech debt and security holes all the time. Brushing off quality and security as pointless is shortsighted.
The main problem Firefox has, really, is Mozilla. And Orion is neat but too immature, and the direction Kagi is taking in general seems to be moving further away from an indie company with a single purpose. I hope they manage to steer themselves back to what got people excited about them to begin with.
I think the author is overselling the benefits of communities a bit. Sure, groups can boost motivation through approval-seeking and availability bias, but they can also trap you in groupthink or misaligned priorities. I'd say the Brawl Stars grind example (despite the author disliking the game) isn't really a win.
Communities can hijack your goals, pulling you toward their agenda instead of yours. Overreliance on them risks eroding self-discipline when the group fades.
Instead, define your goals clearly and use communities sparingly, for knowledge exchange, not validation. Relying on communities too much can leave you stuck in an echo chamber, chasing approval over purpose.
1: https://github.com/ossature/ossature
2: https://github.com/beshrkayali/chomp8
3: https://github.com/ossature/ossature-examples