I've been thinking a lot about this lately. What seems to be missing with most coding agents is a central source of truth. Before, knowledge of what the company was building, and alignment around it, was distributed: people had context about what they had done and what others were doing.
Now the coding agent starts fresh each time, and it's up to you to remember what you asked it and provide the feedback loop.
Instead of chat -> code, I think chat -> spec and then spec -> code is much more the future.
The spec -> code phase should be independent of any human. If the spec is unclear, ask the human to clarify the spec, then use the spec to generate the code.
What happens today is that something is unclear, and there's a loop where the agent starts to uncover some broader understanding, but then it's lost by the next chat. And the human also doesn't learn why their request was unclear. "Memories" and agents files are all duct tape over this problem.
Exactly this. The audit pass in Ossature is specifically for that "unclear spec" case: you resolve ambiguities in the spec before generation starts, rather than discovering them mid-conversation and losing them the next session. Once the plan is clean, the LLM never needs to ask a clarifying question. Memories and agent files are patching over the fact that intent was never properly captured to begin with.
You know, that would actually be pretty fun and cool. Like if you had home automation set up with a "pet assistant", but it would only follow your commands if you made sure to keep it happy.
If it could somehow only work if I maintain the kitchen sink and counter, then maybe I'd be motivated to keep the house clean. The gacha game trains you.
I mean, emails were and still are a huge security risk. Sometimes I'm more scared of employees opening and engaging with emails than I am of anything else.
Apple is terrible for business. Every portal and product requires a new Apple ID. The Apple Store and Apple Business can't use the same Apple ID, and your device ID can't be the same as either. It's madness. At last count I have 4 Apple IDs that I have to shuffle around.
Floats break the basic expectation of == for round-trip verification, not due to programmer error, but because NaN is non-reflexive by spec. A bit-perfect round-trip can reproduce the exact bit pattern and still fail an equality check. The problem is intrinsic to the type, not the operator.
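A minimal Python sketch of the point: the byte-level round-trip is bit-perfect, yet `==` still fails, because IEEE-754 defines NaN as unequal to everything, including itself.

```python
import struct

# Round-trip a NaN through its IEEE-754 byte representation.
original = float("nan")
packed = struct.pack("<d", original)
restored = struct.unpack("<d", packed)[0]

# The round-trip is bit-perfect: the byte patterns are identical.
assert struct.pack("<d", restored) == packed

# Yet equality fails: NaN is non-reflexive by spec.
print(original == restored)   # False
print(original == original)   # False, even against itself

# A correct round-trip check compares bit patterns, not values.
bits_equal = struct.pack("<d", original) == struct.pack("<d", restored)
print(bits_equal)             # True
```

This is why round-trip verification for floats should compare bit patterns (or special-case NaN with `math.isnan`) rather than relying on `==`.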
Skills are obviously a temporary thing, same with teams. The models will just train on all published skills, and AI teams are more or less context engineering. All of it can be replaced by a better model.
GetDynasty.com | San Francisco, CA | Sales (In-Person SF only) | Engineering (In-Person or US Remote initially OK)
Dynasty helps startup founders legally eliminate millions in capital gains taxes through QSBS trust planning. Today, sophisticated tax strategies are locked behind elite law firms and opaque processes. We’re building the software layer that makes advanced trust and tax infrastructure accessible, understandable, and fast.
We combine fintech, legal infrastructure, and thoughtful product design to turn what used to be months of paperwork and six-figure legal bills into a streamlined, guided platform. Our customers are venture-backed founders preparing for exits when the stakes are high and precision matters.
We’re early, fast-moving, and product-obsessed. This is a chance to help define the system of record for founder wealth.
Positions:
* Engineering: Hiring an experienced Python/React engineer to help build the next generation of financial platforms. -> kyle@getdynasty.com
* Sales: We need to hire our second sales rep ASAP -> alessandro@getdynasty.com
If you want to work in SF and join an extremely ambitious, fast-growing startup, please reach out.