Hacker News — raphinou's comments

Here is other dev related music: https://marcus.bointon.com/music/



I'm working on a REST API server backed by a git repo. Having an actor responsible for all git operations saved me a lot of trouble: with all git operations serialised, I didn't have to prevent concurrent git operations myself.

Using actors also greatly simplified other parts of the app.
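The pattern described above can be sketched as a single worker thread owning the repository, with everything else submitting operations as messages. This is a minimal illustration, not the commenter's actual code; the `GitActor` name and the example operation are made up.

```python
import queue
import threading

class GitActor:
    """One thread owns the repo; operations run strictly one at a time."""

    def __init__(self):
        self.inbox = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            op, args, reply = self.inbox.get()
            try:
                # Because only this thread runs operations, no two
                # git commands can ever overlap.
                reply.put(op(*args))
            except Exception as e:
                reply.put(e)

    def call(self, op, *args):
        """Send an operation to the actor and block for its result."""
        reply = queue.Queue(maxsize=1)
        self.inbox.put((op, args, reply))
        result = reply.get()
        if isinstance(result, Exception):
            raise result
        return result

# Many request handlers may call the actor concurrently; the operations
# themselves are serialised through the inbox.
actor = GitActor()
print(actor.call(lambda path: f"committed {path}", "README.md"))
```

In a real server the lambda would be replaced by actual git invocations; the point is that callers never need a lock of their own.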


Isn’t that just serializing through a queue, aka the producer-consumer pattern? Which I think is the correct solution for most common concurrency problems.

For something to be an actor, it should be able to:

- Send and receive messages

- Create other actors

- Change how the next message is handled (become in Erlang)

I think the last one is what differentiates it from simple message passing, and what makes it genius: state machines consuming queues.
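The "become" idea in the list above can be sketched in a few lines: an actor is a state machine consuming a queue, and handling a message may replace the handler used for the next message. This is illustrative pseudocode in Python, not Erlang, and the lock/unlock handlers are invented for the example.

```python
import queue

class Actor:
    def __init__(self, handler):
        self.inbox = queue.Queue()
        self.handler = handler

    def send(self, msg):
        self.inbox.put(msg)

    def run_one(self):
        msg = self.inbox.get()
        nxt = self.handler(self, msg)
        if nxt is not None:
            # "become": the next message is handled by a new behaviour
            self.handler = nxt

# Two behaviours forming a tiny state machine (the `actor` argument is
# unused here but would let a handler send messages or spawn actors).
def locked(actor, msg):
    return unlocked if msg == "unlock" else None

def unlocked(actor, msg):
    return locked if msg == "lock" else None

door = Actor(locked)
door.send("unlock")
door.run_one()
# door.handler is now `unlocked`: the same actor, different behaviour
```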


Agreed: (hierarchical, if you must) state machines consuming queues and writing to queues via messages win. If you are FP-minded like me, you are set up to cleanly separate IO to the edges and have a functional-core, imperative-shell hexagonal architecture with less overhead than a standard Java-beans-style OO logical design.


So you're just using actors to limit concurrency? Why not use a mutex?


This might be a question of personal preference. At the design stage I already find it more approachable to think in separated responsibilities, and that naturally translates to actors. Thinking about the app, it's much easier for me to think "send the message to the actor" than "call the function that takes the necessary mutex". With mutexes, the separation of concerns is not as strong, and you might end up with a function taking multiple mutexes that interfere. With the actor model, I feel there is less risk (though I'm sure seasoned mutex users would question this).


In this simple case they're more or less equivalent if the only task is limiting concurrency, but in general the usage of mutexes multiplies, and soon enough someone else has created a deadlock situation.
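The classic failure mode is two code paths taking the same two locks in opposite order. A minimal sketch (the timeouts exist only so the demo terminates instead of hanging, and neither thread ever releases its first lock, to keep the outcome deterministic):

```python
import threading

a, b = threading.Lock(), threading.Lock()
barrier = threading.Barrier(2)
outcomes = []

def path_one():
    a.acquire()            # first lock: a
    barrier.wait()         # both threads now hold their first lock
    # b is held by the other thread and never released
    outcomes.append(b.acquire(timeout=0.2))

def path_two():
    b.acquire()            # first lock: b — the opposite order!
    barrier.wait()
    outcomes.append(a.acquire(timeout=0.2))

t1 = threading.Thread(target=path_one)
t2 = threading.Thread(target=path_two)
t1.start(); t2.start(); t1.join(); t2.join()
print(outcomes)   # [False, False]: each waits on a lock the other holds
```

Without the timeouts both threads would wait forever; with an actor, both paths would instead enqueue their work and neither could hold a resource while waiting.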

Extending it, however, reveals some benefits: locking often means stopping while you wait, whereas waiting for something enqueued can happen in parallel with waiting for something else that is enqueued.

I think it very much comes down to history and philosophy. Actors are philosophically cleaner (and have gained popularity with success stories), but back in the 90s, when computers were mostly physically single-threaded and memory was scarce, the mutex looked like a cheap, good choice for "all" multithreading issues: it could be a simple lock word, whereas actors need mailbox buffering (allocations... brr), etc., which felt "bloated". In the end, it turned out that separate heavyweight OS-supported threads were often the bottleneck once thread and core counts got larger.

Mutexes are quite often still the base primitive at the bottom of lower-level implementations when compare-and-swap isn't enough, whilst actors are generally a higher-level abstraction (better suited to "general" programming).


Atomic operations, memory barriers, condition variables, thread/"virtual processor" scheduling are philosophically cleaner since they're what the specified hardware/OS concurrency model actually provides, and can implement all of locks, mutexes, structured concurrency, arbitrary queues, actors etc. etc.


I was implicitly including all of those with mutexes in the last sentence. It might be easier to reason about each in isolation because you're an experienced programmer who can (and wants to) reason about the details.

In practice you want to keep the sharp knives away from junior programmers in larger contexts. Handling a few of them in a limited scope is OK, but when you start to reason about larger systems, actors in concurrent contexts can be explained even to junior programmers in terms of conveyor belts of messages or a similar analogy.

I don't know if you've heard the story of a project where C++ was used, where a large part of the collapse was due to C++, OO and/or threading issues, and where Erlang was then used in a crash project to successfully write a replacement on the same hardware.

Some people claim that functional programming and/or actor systems are inherently superior. I don't really agree with that assessment; it's more that the actor patterns were easy to handle in a distributed system, and when used by less seasoned programmers it was possible for them to build their parts in mostly "single-threaded" contexts plus message passing, without causing trouble with lower-level concurrency primitives.

Really, imagine 500+ mid-to-junior programmers banging away at concurrent code on a larger project that needs to ship soon.

This is what I mean by philosophically cleaner.


You are using mutexes: they are in the actor message queues, amongst other places. "Just use mutexes" suggests a lack of experience using them; they are very difficult to get both correct and scalable. By keeping them inside the actor system, a lot of complexity is removed from the layers above. Actors are not always the right choice, but when they are, they are a very useful and simplifying abstraction.

Horses for courses, as they say.


Lock-free queues and 16-core processors exist though. I use actors for the abstraction primarily anyway.


Can you share some insights why mutexes are difficult to get correct and scalable?


Because actors were invented to overcome deadlocks caused by mutexes. See page 137. With mutexes you can forget concurrency safety.


I’m pretty sure it’s possible to deadlock an actor system.
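A sketch supporting this point (not any particular framework): if two actors each make a blocking "ask" call to the other, neither ever services its own inbox, and the system deadlocks just like mutexes can. The timeouts exist only so the demo terminates.

```python
import queue
import threading

results = []

def actor(my_inbox, other_inbox, name):
    reply_q = queue.Queue()
    other_inbox.put(("ask", reply_q))      # request to the other actor
    try:
        # Blocking here means this actor never reads my_inbox, so the
        # other actor's request (sitting in my_inbox) is never answered.
        reply_q.get(timeout=0.3)
        results.append(f"{name}: got reply")
    except queue.Empty:
        results.append(f"{name}: deadlocked")

inbox_a, inbox_b = queue.Queue(), queue.Queue()
ta = threading.Thread(target=actor, args=(inbox_a, inbox_b, "A"))
tb = threading.Thread(target=actor, args=(inbox_b, inbox_a, "B"))
ta.start(); tb.start(); ta.join(); tb.join()
print(sorted(results))   # ['A: deadlocked', 'B: deadlocked']
```

This is why actor systems favour asynchronous replies (handle the response as just another incoming message) over blocking waits.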


A bad one, yes. But it was specifically invented to prevent the mutual-exclusion problem, and good actor systems do so.


I use AI to develop, but at every code review I find things to correct, which motivates me to keep doing the reviews. It's still a win, I think. I've incrementally increased my use of AI in development [1], but I think I'm at a plateau now. I don't plan to move to complete vibe coding for anything serious or meant to be maintained.

1: https://asfaload.com/blog/ai_use/


My use of LLMs for programming has evolved drastically in 6 months, to the point I decided to write about it. Posting it here as I'm curious to read experiences by other programmers!


I could imagine using this to access a beefier machine besides my main work computer. But indeed, paying for a stopped VM is a difficult sell. There was a suggestion to offer pre-installed tools in different images, which I find a good idea. Otherwise, the all-over-SSH workflow is cool!


I use agents in a container and persist their config like you suggest. After seeing some interest I shared my setup at https://github.com/asfaload/agents_container It works fine for me on Linux.


I put all my agents in a Docker container into which the code I'm working on is mounted. It has worked perfectly for me so far. I even set it up so I can run GUI apps like Antigravity in it (X11). If anyone is interested, I shared my setup at https://github.com/asfaload/agents_container


It won’t save you from prompt injections that attack your network.


Shameless plug, in case you're interested: https://github.com/EstebanForge/construct-cli

Let me know if you give it a go ;)


Interesting, any plans to add LiteLLM (https://github.com/BerriAI/litellm) and Kilocode (https://github.com/Kilo-Org/kilocode)?



Will check those out :)


In theory the Docker container should only have the project's directory mounted and open access to the internet, and that's it. No access to anything else on the host or the local network.

Internet to connect with the provider, install packages, and search.

It's not perfect but it's a start.
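As a rough sketch of that setup (the image name is made up, and the hardening flags are illustrative additions, not necessarily what the project uses):

```shell
# Mount only the project directory; drop extra capabilities; rely on
# Docker's default bridge network for internet access.
docker run --rm -it \
  --volume "$PWD":/work \
  --workdir /work \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  agents-image:latest
```

Note that the default bridge network still allows outbound traffic to the LAN unless you restrict it further (e.g. with a custom network or firewall rules).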


Docker containers run in their own separate, isolated network.


Of course, I'm not pretending this is a universal remedy that solves all problems. But I will add a note in the README to make it clear. Thanks for the feedback!


Did you document this somewhere? I'd be interested to know more.


Nah, first time I’ve mentioned it anywhere. Happy to answer questions, if there’s interest maybe this could be my reason for a first blog post.


I would encourage you to write about it as well. It seems interesting and unconventional.

I used to tinker a lot with my systems, but as I've gotten older and my time has become more limited, I've abandoned a lot of it and now favor "getting things done". Though I still tinker a lot with my systems and have my own workflow and setup, it's no longer at the level of recompiling the kernel with my specific optimizations, if that makes sense. I am now paid to "tinker" with my clients' systems, but I stay away from the unconventional there if I can.

I did reach a point where describing systems is useful, at least as a way of documenting them. I keep circling around NixOS but haven't taken the plunge yet. It feels like containerfiles are an easier approach, but they (at least docker) sort of feel designed around describing application environments as opposed to full system environments. So your approach is intriguing.


> It feels like containerfiles are an easier approach, but they (at least docker) sort of feel designed around describing application environments as opposed to full system environments.

They absolutely are! I actually originally just wanted a base container image for running services on my hosts that a) I could produce a full source code listing for and b) have full visibility over the BoM, and realized I could just ‘FROM scratch’ & pull in gentoo’s stage3 to basically achieve that. That also happens to be the first thing you do in a new gentoo chroot, and I realized that pretty much every step in the gentoo install media that you run after (installing software, building the kernel, setting up users, etc.) could also be run in the container. What are containers if not “portable executable chroots”, after all?

My first version of this build system was literally to copy / on the container to a mounted disk I manually formatted. Writing to disk is actually the most unnatural part of this whole setup, since no one really has a good solution for doing it without using the kernel; I used to format and mount devices directly in a privileged container, but now I just boot a qemu VM in an unprivileged container and do it in an initramfs, since I was already building those manually too.

I found while iterating on this that all of the advantages you get from Containerfiles (portability, repeatability, caching, minimal host runtime, etc.) naturally translated over to the OS builder project, and since I like deploying services as containers anyways, there’s a high degree of reuse going on vs needing separate tools and paradigms everywhere.

I’ll definitely write it up and post it to HN at some point; trying to compact the whole project into just that blurb felt painful.


Thanks for sharing! Definitely interested in reading further about the project.


I would also be very interested in reading that blog post!


same here! very interesting :)


Not what was mentioned by parent but I've been working on an embedded Linux build system that uses rootfs from container images: https://makrocosm.github.io/makrocosm/

The example project uses Alpine base container images, but I'm using a Debian base container for something else I'm working on.


Honestly this is just sorta a Tuesday for an advanced Gentoo user? There are lots of ways to do this documented on the Gentoo wiki. Ask in IRC or on the Forum if you can't find it. "Catalyst" is the method used by the internal build systems to produce images, for instance https://wiki.gentoo.org/wiki/Catalyst.


Also a Postgres user. Wondering why the MySQL wire protocol and not pgsql's: did choosing MySQL have advantages over pgsql in this case?


You point out a question that I spent months thinking about. I personally love Postgres; heck, I initially even had a version that would talk the Postgres wire protocol but with SQLite-only syntax. But then somebody pointed me to my WordPress demo, and it was obvious that I had to support the MySQL protocol. It's just a protocol; the underlying technology stays independent from what I choose.


Related, Corrosion has experimental support for the pgsql wire protocol (limited to sqlite-flavored SQL queries): https://superfly.github.io/corrosion/api/pg.html


I always run my agents in a container with the source-code directory mounted. That way I can be reasonably confident letting them work without fearing destructive actions on my system. And I'm a git reset away from restoring the source code.

