I defaulted to Biome for all greenfield projects a year ago, and at this point you would have to drag me kicking and screaming back to ESLint and Prettier. I also defaulted to Bun and still think Bun is leagues better than Node.js but I now have my doubts about its future after seeing the OpenCode devs consciously minimize their dependency on Bun for strategic reasons.
OpenAI does have a real problem with traction amongst developers, though. Codex is far behind Claude Code (in both its GUI and TUI forms), and OpenAI's chief of applications recently announced a pivot to focus more on "productivity" (i.e. software and enterprise verticals), because B2B yields a lot more revenue than B2C.
I don't get the benefit. Yes, agents should not have access to API keys, because they can easily be fooled into giving them up. But what's to prevent a malicious agent from reusing the honest agent's fake API key once it has been exfiltrated via prompt injection? The gateway can't tell that the request is coming from the malicious agent. And if the honest agent can read its own proxy authorization token, it can be tricked into giving that up as well.
It seems the only sound solution is to attach a sidecar to the agent and have the sidecar authenticate to the gateway using mTLS. The sidecar holds its own TLS private key; the agent never has access to it.
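To make the distinction concrete, here's a toy sketch of why a bearer token leaks but a sidecar-held key doesn't. Everything here is hypothetical (the class names, the request strings); the "sidecar" is modeled as a separate object whose signing key the agent process can never read, and HMAC request signing stands in for a real mTLS handshake:

```python
import hmac
import hashlib
import os


class Sidecar:
    """Holds the private key and signs requests on the agent's behalf.
    Stands in for an mTLS client certificate: the agent never sees the key,
    so prompt injection can't exfiltrate it."""

    def __init__(self, agent_id: str, key: bytes):
        self._key = key  # private to the sidecar process
        self.agent_id = agent_id

    def sign(self, request: str) -> str:
        msg = f"{self.agent_id}:{request}".encode()
        return hmac.new(self._key, msg, hashlib.sha256).hexdigest()


class Gateway:
    """Verifies that each request was signed by the registered sidecar key."""

    def __init__(self):
        self._keys = {}  # agent_id -> key, provisioned out of band

    def register(self, agent_id: str) -> Sidecar:
        key = os.urandom(32)
        self._keys[agent_id] = key
        return Sidecar(agent_id, key)

    def verify(self, agent_id: str, request: str, signature: str) -> bool:
        key = self._keys.get(agent_id)
        if key is None:
            return False
        msg = f"{agent_id}:{request}".encode()
        expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)


gateway = Gateway()
honest = gateway.register("honest-agent")

# The honest agent's request goes through its sidecar and verifies fine.
sig = honest.sign("GET /tools/search")
assert gateway.verify("honest-agent", "GET /tools/search", sig)

# A prompt-injected agent can leak a past signature, but an attacker
# can't forge signatures for *new* requests without the sidecar's key.
assert not gateway.verify("honest-agent", "POST /tools/delete-everything", sig)
```

A leaked bearer token authorizes every future request; a leaked signature here only covers one request, and real mTLS goes further still, since the handshake binds the key to the live channel so even replay fails.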
Precisely. You absolutely have to ensure that random agents can't join your local network, which means you need a deterministic orchestrator or an AI orchestrator that only spins up a handful of vetted agents.
People are addressing that gap. I have a secure agent framework that uses a tool gateway hooked up to OPA (https://github.com/sibyllinesoft/smith-core). This solves the credential issue, since the credentials live in the tools, and the authz issue, since OPA policy controls who does what.
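The general pattern is simple enough to sketch. This is a hypothetical illustration (not code from smith-core): the gateway owns the secrets and injects them server-side after a policy check, so agents name tools but never see keys. A plain dict stands in for the OPA query; in a real deployment the decision would come from OPA's REST Data API:

```python
# Hypothetical tool-gateway sketch. All names and keys are made up.

POLICY = {  # in the real setup, this decision comes from OPA/Rego policy
    "researcher": {"web_search"},
    "coder": {"web_search", "run_tests"},
}

SECRETS = {  # live only inside the gateway; never returned to any agent
    "web_search": "sk-search-placeholder",
    "run_tests": "sk-ci-placeholder",
}


def call_tool(agent: str, tool: str, args: dict) -> str:
    """Gateway entry point: authorize, attach the credential, forward."""
    if tool not in POLICY.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    api_key = SECRETS[tool]  # attached here, server-side
    # ... forward args to the tool with api_key in the auth header ...
    return f"ok: {tool} executed for {agent}"


assert call_tool("coder", "run_tests", {}) == "ok: run_tests executed for coder"
try:
    call_tool("researcher", "run_tests", {})  # denied by policy
except PermissionError:
    pass
```

Even a fully compromised agent can only ask for tool calls the policy allows, and there is no key in its context window to exfiltrate in the first place.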