How do you manage secrets in this setup? Are they injected at build time? Or does it require some manual setup by the user once the system is up and running?
I don't know how OP manages their secrets, but I'm running NixOS and letting 1Password manage all of mine. 1Password can act as an SSH agent, inject environment variables, and manage passwords/keys in the browser. All I need to do when I set up a new machine with NixOS is connect it to my 1Password account manually; after that it's all automated.
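For reference, the NixOS side of that is just a couple of module options plus pointing SSH at the 1Password agent. A minimal sketch (the option names exist in nixpkgs; the username and agent socket path are placeholders for my setup):

```nix
# Hypothetical NixOS module snippet: enable 1Password and route SSH
# through its agent. "alice" is a placeholder username.
{ config, pkgs, ... }:
{
  programs._1password.enable = true;        # the `op` CLI
  programs._1password-gui = {
    enable = true;
    polkitPolicyOwners = [ "alice" ];       # lets the GUI prompt via polkit
  };

  # Point the SSH client at the 1Password agent socket instead of ssh-agent.
  programs.ssh.extraConfig = ''
    IdentityAgent ~/.1password/agent.sock
  '';
}
```

The one manual step after `nixos-rebuild switch` is signing in to the 1Password app once; everything else (SSH keys, env vars via `op run`, browser fill) comes along automatically.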
The focus is on downstream, but is upstream ready for this speed up?
The linked blog post draws comparisons to the industrial revolution; however, in the industrial revolution the speed-up drove innovation upstream, not downstream.
The first innovation was mechanical weaving. The bottleneck then became yarn. Spinning was automated, so the bottleneck shifted to cotton production, which was then mechanised in turn.
So perhaps the real bottleneck of being able to write code faster is upstream.
Can requirements of what to build keep up with pace to deliver it?
I'd be keen to read/hear more about the experiment you've been undertaking, as I too have been thinking about the impact on the design/architecture/organising of software.
The focus mainly seems to be on enhancing existing workflows to produce the code we currently expect - often you hear it's like a junior dev.
The type of rethinking you outlined could have code organised in such a way that a junior dev would never be able to extend it, but our 'junior dev' LLM can iterate through changes easily.
I care more about the properties of software, e.g. testable, extendable, secure, than how it is organised.
Gets me thinking about questions like:
- what is the correlation between how code is organised vs its properties?
- what is the optimal organisation of code to make it easy for LLMs to modify and extend software?
I'm especially pleased with how explicit it makes the inner dependency graph. Today I'm tinkering with Pact (https://docs.pact.io/). I like that I'm forced to add the pact contracts generated during consumer testing as flake outputs (so they can then be inputs to whichever flake does provider testing). It's potentially a bit more work than it would be under other schemes, but it also makes the directionality of the dependency a first-class citizen rather than an implementation detail. Otherwise it would be easy to forget which batch of tests depends on artifacts generated by the other.
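To make the shape of that concrete, here's a rough sketch of what I mean (flake names, URLs, and the contract-generation step are all placeholders, not my actual repos):

```nix
# Hypothetical consumer flake: the consumer test run writes ./pacts/*.json,
# and exposing them as a package output makes the contracts a first-class
# build artifact rather than a stray file on disk.
{
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      packages.x86_64-linux.pact-contracts =
        pkgs.runCommand "pact-contracts" { } ''
          mkdir -p $out
          # In reality the consumer test suite would run here and emit
          # the JSON contracts; copying a checked-in dir keeps the sketch short.
          cp ${./pacts}/*.json $out/
        '';
    };
}
```

The provider-side flake then declares the consumer as an input and verifies against `consumer.packages.x86_64-linux.pact-contracts`, so the consumer -> provider direction of the dependency is spelled out in the flake graph itself instead of being tribal knowledge.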
I suppose there are things like Bazel for that sort of thing too, but I don't think you can drop an agent into a Bazel... thingy... and expect it to feel at home.
To be clear, I was referring to attributes of the ComfyUI software/project, not the idea of serving an image-generation API. There are several of those providers.
I don't believe there is a viable use case for large-scale AI-generated images the way there is for text... except for porn, but many orgs with SaaS capabilities wouldn't touch that.