This piece appears to be a response to [my recent essay on craft, alienation, and LLMs][1], so let me engage with it directly.
The argument collapses every social and structural explanation into a single move: the individual chose it. This is the classic libertarian reduction, and it has a well-known failure mode. Under this framework, there is no coherent distinction between a choice made under duress and a choice made freely. If a developer uses LLM coding assistants because their livelihood depends on keeping pace with colleagues who do, and the author's response is that no one forced them, well, no one forces a person at gunpoint to hand over their wallet either. The gun is still there.
The author acknowledges, mid-essay, that the system “can change incentives and tradeoffs.” But this is precisely what a structural analysis is. Once you admit that incentives can be arranged such that a person has no viable path except the one the system rewards, you have already conceded the core Marxian point. Calling it “alienation” or not is just terminology.
What the alienation framework actually claims is not that individuals don't choose. It's that the conditions under which those choices are made matter morally and analytically. My own essay is careful about this: I noted explicitly that the tension between craft and efficiency doesn't vanish under different political arrangements. The question survives capitalism; capitalism just answers it harshly. Dismissing this as a “denial of the craftsman” misreads the argument.
On LLM capabilities: the claim that none of these problems can be solved by LLMs (understanding systems, architecture decisions, debugging) reads as confident as of roughly two years ago. The frontier has moved. Coding agents are already handling non-trivial architectural reasoning in constrained domains, and the trajectory is visible. Anchoring the argument to current limitations, stated as permanent ones, is a move that ages badly.
What strikes me most about this acquisition isn't the AI angle. It's the question of why so many open source tools get built by startup teams in the first place.
I maintain an open source project funded by the Sovereign Tech Fund. Getting there wasn't easy: the application process is long, the amounts are modest compared to a VC round, and you have to build community trust before any of that becomes possible. But the result is a project that isn't on anyone's exit timeline.
I'm not saying the startup path is without its own difficulties. But structurally, it offloads the costs onto the community that eventually comes to depend on you. By the time those costs come due, the founders have either cashed out or the company is circling the drain, and the users are left holding the bag. What's happening to Astral fits that pattern almost too neatly.
The healthier model, I think, is to build community first and then seek public or nonprofit funding: NLnet, STF, or similar. It's slower and harder, but it doesn't have a built-in betrayal baked into the structure.
Part of what makes this difficult is that public funding for open source infrastructure is still very uneven geographically. I'm based in Korea, and there's essentially nothing here comparable to what European developers can access. I had no choice but to turn to European funds, because there was simply no domestic equivalent. That's a structural problem worth taking seriously. The more countries that leave this entirely to the private sector, the more we end up watching exactly this kind of thing play out.
A lot of great open source comes out of startups because startups are really good at shipping fast and getting distribution (open source is part of this strategy). Users can try the tool immediately, and VC funding can put a lot of talent behind building something great very quickly.
The startup model absolutely creates incentive risk, but that’s true of any project that becomes important while depending on a relatively small set of maintainers or funders.
I’m not sure an acquisition is categorically different from a maintainer eventually moving on or burning out. In all of those cases, users who depend on the project take on some risk. That’s not unique to startups; it’s true of basically any software that becomes important.
There’s no perfect structure for open source here - public funding, nonprofit support, and startups all suck in their own ways.
And on the point you make about public funding being slow: yeah, talented people can’t work full-time on important things unless there’s serious funding behind it. uv got as good as it is because the funding let exceptional people work on it full-time with a level of intensity that public funding usually does not.
That's fair, and I don't really blame anyone for taking the startup route. It's often the only realistic path to working full-time on something you care about. My point is more that it shouldn't have to be. The more public funding flows into open source infrastructure, the less that tradeoff becomes necessary in the first place. Korea being almost entirely absent from that picture is part of why I feel this so keenly.
I could not agree more, though I have little experience in open source. I knew the Korean environment for open source software would be tough before coming back from Europe; it seems much easier to target international traction than to focus on domestic interest.
Personally, since you have been active in Korea, I'd like to know whether there are any groups I could join.
uv's success is downstream of paying something like ten very good Rust tooling developers to work on uv full time.
Full time effort put into tools gets you a looooot of time to make things work well. Even ignoring the OSS stuff, many vendors seem to consider their own libs to be at best “some engineers spend a bit of time on them” projects!
Yeah, it's pretty hard to deny that, at least for the Python tooling use case, the typical open source methodologies failed to solve in a couple of decades what uv solved in a few years as a startup. The lesson we should take from that is probably more up for debate.
Most likely, because it is less money :-p. But also because it is less known and harder, as you already mentioned. Personally, I'm based in Mexico, and I would never have thought about trying to get nonprofit funding for a community project, nor would I know where to start to get that.
I don't think government-funded projects are any more secure. The political climate changes every few years, and we have plenty of examples of previous decisions being scrapped: LiMux in Munich was scrapped overnight, and USAID was shut down in no time.
You get a little more stability for a lot of headache, but nobody guarantees that the political stance won't change drastically in a few years and that the fund won't be cut or even closed.
> it doesn't have a built-in betrayal baked into the structure
Astral was founded as a private company. Its team has presumably worked hard to build something valuable. Calling their compensation for that work 'betrayal' is unfair.
Community-based software development sounds nice. But with rare exception, it gets outcompeted by start-ups. Start-ups work, and they work well. What is problematic is the tech giants' monopoly of the acquisition endpoint. Figuring out ways for communities to compete with said giants as potential acquirers is worth looking into: maybe public loans to groups of developers who can show (a) they're committed to keep paying for the product and (b) aren't getting kicked back.
I was looking into Fedify just yesterday! I'm trying to decide whether to A) try to make my blog an ActivityPub instance of some sort, B) host my own Mastodon instance, or C) use someone else's Mastodon server and link to my blog POSSE style. If I go with option A (which somehow feels like how things are _supposed_ to work), Fedify looks like the way to make it happen.
> It's the question of why so many open source tools get built by startup teams in the first place
The capitalist answer would be that markets are more efficient than governments at capital allocation, and thus private companies are better positioned to develop software that solves real-world problems; in this case they are so much more efficient at it that the stuff those companies give away for free as open source still dwarfs publicly funded efforts.
My own opinion is that there are plenty of software problems worth solving that don't fit neatly into that bucket and you're likely right that some increased degree of public funding around them is worthwhile. In the US that tends to end up flowing through the university systems. I mean, the internet itself was DoD funding going to university labs.
open source allows you to build community trust much faster, and community trust/adoption is key
I don't see any betrayal here, since the tools are still OSS - yeah OpenAI might take it a different direction and add a bunch of stuff I don't like/want, but I can still fork
One of the biggest advantages of PostgreSQL is GiST (Generalized Search Tree), which is based on the theory of indexability.
> One advantage of GiST is that it allows the development of custom data types with the appropriate access methods, by an expert in the domain of the data type, rather than a database expert.
> Traditionally, implementing a new index access method meant a lot of difficult work. It was necessary to understand the inner workings of the database, such as the lock manager and Write-Ahead Log. The GiST interface has a high level of abstraction, requiring the access method implementer only to implement the semantics of the data type being accessed. The GiST layer itself takes care of concurrency, logging and searching the tree structure.
> [...]
> So if you index, say, an image collection with a PostgreSQL B-tree, you can only issue queries such as "is imagex equal to imagey", "is imagex less than imagey" and "is imagex greater than imagey". Depending on how you define "equals", "less than" and "greater than" in this context, this could be useful. However, by using a GiST based index, you could create ways to ask domain-specific questions, perhaps "find all images of horses" or "find all over-exposed images".
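To make the quoted idea concrete, here is a minimal sketch of what using a GiST index looks like in practice, shown as SQL statements embedded in Python strings. This uses PostgreSQL's built-in `tsrange` type rather than a custom one; the table and column names are invented for illustration.

```python
# Hypothetical sketch: a GiST index on PostgreSQL's built-in tsrange type.
# The table and column names below are made up for illustration.
CREATE_TABLE = """
CREATE TABLE reservations (
    room    text,
    period  tsrange
);
"""

CREATE_INDEX = """
CREATE INDEX reservations_period_idx
    ON reservations USING gist (period);
"""

# The && ("overlaps") operator is a domain-specific question a B-tree
# cannot index, but a GiST index can answer it efficiently.
OVERLAP_QUERY = """
SELECT room FROM reservations
WHERE period && tsrange('2026-03-01', '2026-03-02');
"""
```

The same pattern extends to custom data types: implement the GiST support methods for your type, and the `USING gist` clause and your domain-specific operators work the same way.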
If you are already using PostgreSQL but didn't know about this, I highly recommend learning about GiST. It is one of the most powerful features of PostgreSQL that I know of.
> Node sucks for general web apps because you have to program everything asynchronously.
To be fair, the real problem is not async itself, but forcing CPS (continuation-passing style) onto serial routines. For example, gevent and eventlet use greenlets (coroutines for Python) to avoid unnecessary callbacks in serial routines.
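A minimal sketch of the contrast, using stdlib `asyncio` in place of gevent's greenlets (all function names here are invented; `asyncio.sleep(0)` stands in for real I/O):

```python
import asyncio

# Continuation-passing style: each step hands its result to a callback,
# so a serial routine gets chopped into nested closures.
def fetch_cb(value, callback):
    callback(value * 2)  # stand-in for delivering an I/O result

def serial_cb(done):
    fetch_cb(1, lambda a:
        fetch_cb(a + 1, lambda b:
            done(b)))

# Coroutine style: gevent/eventlet achieve this shape implicitly with
# greenlets; async/await makes it explicit. Either way, the serial
# routine reads top to bottom with no callbacks.
async def fetch(value):
    await asyncio.sleep(0)  # stand-in for real async I/O
    return value * 2

async def serial():
    a = await fetch(1)
    b = await fetch(a + 1)
    return b
```

Both versions compute the same thing; the difference is that the coroutine version keeps the serial control flow visible in the code.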
You can do RAII using `with` or `using` blocks, but you can't control execution with those like you can with blocks - the body of the `with` or `using` statement will always execute once. You can't prevent it from executing (unless you throw an exception), and you can't re-execute it with different values in the `as` clause.
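A small Python sketch of that asymmetry (the helper names are invented): a context manager's body executes exactly once, while a function that receives a callable, the closest Python analogue to a block, decides itself whether and how often to run it.

```python
from contextlib import contextmanager

# RAII-style cleanup: the body between "acquire" and "release" will
# run exactly once; the context manager cannot skip or repeat it.
@contextmanager
def managed(log):
    log.append("acquire")
    try:
        yield
    finally:
        log.append("release")

# A callable-taking function controls execution itself:
# it may run the body many times...
def repeat(n, body):
    return [body(i) for i in range(n)]

# ...or not at all.
def maybe(condition, body):
    return body() if condition else None
```

With `managed` the only way to avoid the body is to raise before entering it, whereas `repeat` and `maybe` choose freely how many times the passed-in callable runs.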
It's an evil thing that originated in JavaScript. PHP has always borrowed evil things from other languages. As a result, PHP keeps getting more evil with each version.
[1]: https://writings.hongminhee.org/2026/03/craft-alienation-llm...