This is incorrect. Machine Learning is a term that refers to numerical as opposed to symbolic AI. ML is a subset of AI as is Symbolic / Logic / Rule based AI (think expert systems). These are all well established terms in the field. Neural Networks include deep learning and LLMs. Most AI has gone the way of ML lately because of the massive numerical processing capabilities available to those techniques.
One of the biggest problems I've noticed with younger computer users is that they have no idea where they saved a file. Having to drag it to a specific folder seems like it would be harder to forget in that case.
If it paid for people's lives and sustained itself, that sounds like a huge success to me. There's a part of me that thinks, maybe we'd all be better off if we set the bar for success of a business at "sustains the lives of the people who work there and itself is sustainable."
The one thing that sold me on Rust (going from C++) was that there is a single way errors are propagated: the Result type. No need to bother with exceptions, functions returning bool, functions returning 0 on success, functions returning 0 on error, functions returning -1 on error, functions returning negative errno on error, functions taking optional pointer to bool to indicate error (optionally), functions taking reference to std::error_code to set an error (and having an overload with the same name that throws an exception on error if you forget to pass the std::error_code)... I understand there's 30 years of history, but it's still annoying that even the standard library is not consistent (or striving for consistency).
Then you top it off with the `?` shortcut and the functional interface of Result, and suddenly error handling becomes fun and easy to deal with, rather than just "return false" with a "TODO: figure out error handling".
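A minimal sketch of what that looks like (hypothetical example, not from any particular codebase):

```rust
use std::num::ParseIntError;

// Every fallible step returns a Result; `?` propagates the error to
// the caller with no exceptions, sentinel values, or out-parameters.
fn parse_and_double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.trim().parse()?; // early-returns Err on bad input
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("nope").is_err());

    // The functional interface composes too: Result implements Sum,
    // so the first Err short-circuits the whole fold.
    let sum: Result<i32, ParseIntError> =
        ["1", "2", "3"].iter().map(|s| s.parse::<i32>()).sum();
    assert_eq!(sum, Ok(6));
}
```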
There's a difference between data transport and data hosting. Modern expectations of messengers seem to blur this line and it's better if it's not blurred.
Incidentally: The reason why they blur it is because of 2 network asymmetries prevalent since the 1990's that enforced a disempowering "all-clients-must-go-through-a-central-server model" of communications. Those 2 asymmetries are A) clients have lower bandwidth than servers and B) IPv4 address exhaustion and the need/insistence on NAT. It's definitely not practical to have a phone directly host the pictures posted in its group chats, but it would be awesome if the role of a messaging app's servers was one of caching instead of hosting.
In the beginning though: the very old IRC was clear on this; it was a transport only, and didn't host anything. Anything relating to message history was 100% a client responsibility.
And really I have stuck with that. My primary expectation of messaging apps is message transport. Syncing my message history across disparate devices is cool, and convenient, but honestly I don't really need it in a personal capacity if each client is remembering messages. I don't understand how being responsible for the management of my own data is "less control of my life"; it seems like more control. And ... I'm not sure I care about institutional entitlement to archive stuff that is intended to be totally personal.
I understand companies like to have group chats, and history may be more useful and convenient there, but that's why I'm not ever going to use Teams for personal purposes. But I'm not going to scroll back 10 years later on my messaging apps to view old family pictures. I'm going to have those saved somewhere.
Fun fact that was dredged up because the author mentions Australia: GPS points change. Their example coordinates give 6 decimal places, accurate to about 10-15cm. Australia a few years back shifted all locations 1.8m because of continental drift (they're moving north at ~7cm/year). So even storing coordinates as a source of truth can be hazardous. We had to move several thousand points for a client when this happened.
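For scale, a back-of-envelope check (assumed figure: roughly 111.32 km per degree of latitude):

```rust
// The nth decimal place of a latitude is worth about 111_320 / 10^n
// meters (assumed ~111.32 km per degree; longitude shrinks with
// latitude, so this is the upper bound).
fn decimal_precision_m(places: i32) -> f64 {
    111_320.0 * 10f64.powi(-places)
}

fn main() {
    // 6 decimal places resolve ~0.11 m, i.e. the 10-15 cm ballpark.
    let p6 = decimal_precision_m(6);
    assert!(p6 > 0.10 && p6 < 0.12);

    // Australia's GDA94 -> GDA2020 datum shift moved points ~1.8 m:
    // roughly 16x the precision those six decimals imply.
    assert!(1.8 / p6 > 15.0);
}
```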
> If that asset generates revenue for 120 years, then it’s slightly more valuable than an asset that generates revenue for 119 years, and considerably more valuable than an asset that generates revenue for 20 years.
Not so, because of net present value.
The return from investing in normal stocks is ~10%/year, which compounds to roughly 6.7x over 20 years. Another way of saying this is that $1 in 20 years is worth ~$0.15 today. A dollar in 30 years is worth ~$0.05 today. A dollar in 40 years is worth ~$0.02 today. As a result, if a thing generates the same number of dollars every year, the net present value of the first 20 years is significantly more than the net present value of all the years from 20-120 combined, because money now or soon from now is worth so much more than money a long time from now. And that's assuming the revenue generated would be the same every year forever, when in practice it declines over time.
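The arithmetic can be sketched like this (assuming a flat 10% discount rate and a constant $1/year revenue stream):

```rust
// Present value of $1 received t years from now at a 10% discount rate.
fn pv(t: i32) -> f64 {
    1.0 / 1.1_f64.powi(t)
}

// Net present value of $1/year received in years a..=b.
fn npv(a: i32, b: i32) -> f64 {
    (a..=b).map(pv).sum()
}

fn main() {
    // A dollar in 20 years is worth ~$0.15 today.
    assert!((pv(20) - 0.15).abs() < 0.01);

    // Years 1-20 dwarf years 21-120 combined (~8.5 vs ~1.5 dollars of
    // present value): the first 20 years carry ~85% of the total.
    let first_20 = npv(1, 20);
    let rest_100 = npv(21, 120);
    assert!(first_20 > 5.0 * rest_100);
    let share = first_20 / (first_20 + rest_100);
    assert!(share > 0.84 && share < 0.87);
}
```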
The reason corporations lobby for copyright term extensions isn't that they care one bit about extended terms for new works. It's because they don't want the works from decades ago to enter the public domain now, and they're lobbying to make the terms longer retroactively. But all of those works were already created and the original terms were sufficient incentive to cause them to be.
Usenet, as far as I remember, used to be fucking hell to maintain properly. With each server having to mirror basically everything, it was a hog on bandwidth and storage, and most server software in its heyday was a hog on the filesystems of its day (you had to make sure you had plenty of inodes to spare).
The other day, I logged into Usenet via Eternal September and found out that it was 95% zombies sending spam you could recognize from the turn of the millennium. On one hand, it made me feel pretty nostalgic. Yay, 9/11 conspiracy theories! Yay, more all-caps deranged Illuminati conspiracies! Yay, Nigerian princes! Yay, dick pills! And an occasional on-topic message, which strangely felt out of place.
On the other hand, I felt like I was in a half-dark mall bereft of most of its tenants, where the only places left are an 85-year-old watch repair shop and a photocopy service on the other end of the floor. On still another hand, it turns out I haven't missed much by not being on Usenet, as all-caps deranged conspiracy shit abounds on Facebook.
I would welcome a modern replacement for Usenet, but I feel like it would need a thorough redesign based on modern connectivity patterns and computing realities.
Buggy unsafe blocks can affect code anywhere (through Undefined Behavior, or breaking the API contract).
However, if you verify that the unsafe blocks are correct, and the safe API wrapping them rejects invalid inputs, then they won't be able to cause unsafety anywhere.
This does reduce how much code you need to review for memory safety issues. Once it's encapsulated in a safe API, the compiler ensures it can't be broken.
This encapsulation also prevents combinatorial explosion of complexity when multiple (unsafe) libraries interact.
I can take zlib-rs, and some multi-threaded job executor (also unsafe internally), but I don't need to specifically check how these two interact.
zlib-rs needs to ensure they use slices and lifetimes correctly, the threading library needs to ensure it uses correct lifetimes and type bounds, and then the compiler will check all interactions between these two libraries for me. That's like (M+N) complexity to deal with instead of (M*N).
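A sketch of what that encapsulation looks like (hypothetical example, not actual zlib-rs code): the unsafe block's invariant is justified locally, and after that the compiler polices every caller.

```rust
// Safe API over an unsafe block: callers can't break the invariant,
// so reviewing this one function suffices for memory safety.
fn first_half(slice: &[u8]) -> &[u8] {
    let mid = slice.len() / 2;
    // SAFETY: mid <= slice.len() always holds, so the unchecked
    // slicing below cannot go out of bounds.
    unsafe { slice.get_unchecked(..mid) }
}

fn main() {
    assert_eq!(first_half(&[1, 2, 3, 4, 5]), &[1, 2]);
    assert_eq!(first_half(&[]), &[] as &[u8]);
    // Lifetimes and aliasing of the returned borrow are checked by the
    // compiler at every call site, so composing this with another
    // (internally unsafe) library needs no pairwise audit.
}
```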
With huge respect, I'm a European SWE making a little more than 3k net (meaning what's left after all taxes).
I own a nice house in a countryside village that I bought recently (so at the current market price), 10 min walking distance from the train station. I can afford premium quality food, I have enough money (and time !) to go on vacation 4 to 5 weeks per year (not just holidays but going abroad as a tourist). I own two cars. I’ll have a retirement.
Life hasn't been kind to me over the last decade: I had to undergo a 100k+ surgery, and I now take a treatment costing about 150€/month. My grandmother had a stroke and now lives, with medical care, under my dad's roof. I went through a burnout and stayed home for a year to recover. And you know what? All of this had barely any impact on our finances. Everything health related: zero impact.
Now everything is fine, my health is better, I still have strong savings, I still own my house, my grandmother is well taken care of… I would never trade that for the extra 4k I could lose at any moment, without notice, because life happens.
I made the switch to Kotlin around 2018, after having used Java since 1995. Java is a choice these days, not a necessity. Kotlin is a drop-in replacement for Java in 100% of its core use cases. No exceptions that I know of. Doesn't matter whether you do Android or Spring Boot. It kind of does it all. And it's actually really good at dealing with creaky old Java APIs. Extension functions (one of the party tricks modern Java hasn't yet picked up from Kotlin) are amazing for adapting old code to be a bit less tedious to deal with.
You don't really lose anything (APIs, tools, etc.); but you gain a lot. I still use Spring. But I've now used it longer with Kotlin than with Java.
And the nice thing with Kotlin is that it is gaining momentum as its own ecosystem. I'm increasingly biased against touching old-school Java libraries. Those lock me into the JVM, and I like having the option of running things in wasm, in the browser, or on mobile. Our frontend is kotlin-js and it runs a lot of the same code we run in our Spring Boot server. Kotlin Multiplatform is nice. I've published several libraries on Github that compile for platforms that I don't even have access to. Supposedly my kt-search client (for opensearch and elasticsearch) should work on an Apple watch. I've not gotten around to testing that just yet, and I don't see why you'd want that. But it definitely builds for it. I had one person some time ago trying it out on iOS; same thing (I'm an Android user).
Ecosystems are important. But it's also important to not get too hung up on them. I say that as somebody that made the bet on Java when there was essentially no such thing as a Java ecosystem. It was all new. Kind of slow and wonky. And there were a lot of things that didn't quite work right. But it had lots of people working on fixing all of those things. And that's why it's so dominant now. People are more important than the status quo.
Sometimes you just have to cut loose from the past and not go with something safe but very boring like Delphi, Visual Basic, Perl, and several other things that were quite popular at the time and didn't quite make it. They're still around and there's a half decent library ecosystem even probably. But let's just say none of those things are obvious things to pick for somebody doing something new that was born this century.
Go as an ecosystem is definitely large enough at this point that I would label it as low risk. Same with Rust. Neither is going anywhere and there are plenty of well motivated people working on keeping all that going. Same with Java. All valid reasons for using any of those. But nothing is set in stone in this industry. A lot of things people were using in the nineties did not age well. And it will be the same in another 30 years. Most of those old people that will never change retire at some point. Java projects are pretty depressing to work on these days for me. Lots of grumpy old people my age. I've had a few of those in the last few years. Not my idea of fun. The language is one thing but the people haven't aged well.
It's beyond wrong.
For example, at the core of plenty of numerical libraries is 30+ year old fortran code from netlib that works great.
It's just done.
It does what it's supposed to. It does not have meaningful bugs. There is no reason to change it.
Obviously, one can (and people do) rewrite it in another language to avoid having to have a fortran compiler around, or because they want to make some other tradeoff (usually performance).
But it's otherwise done. There is no need to make new releases of the code. It does what it is supposed to do.
That's what you want out of libraries past a certain point - not new features, you just want them to do what you want and not break you.
> Instead, think of your queries as super human friendly SQL.
I feel that comparison oversells things quite a lot.
The user is setting up a text document which resembles a
question-and-response exchange, and executing a make-any-document-bigger algorithm.
So it's less querying for data and more like shaping a sleeping dream of two fictional characters in conversation, in the hopes that the dream will depict one character saying something superficially similar to mostly-vanished data.
The author seems misinformed about the purpose of TPM to DRM schemes.
The purpose of a TPM, in this case, is not to provide encryption, but instead to provide so-called ‘authenticity’. A TPM with its attestation capabilities can allow a remote validator to attest the operating system and system software you are running via the PCRs which are configured based on it, with Secure Boot preventing tampering. [1] Google tried to implement APIs to plug this into the Chrome browser, which was later abandoned after backlash. [2]
In this case, the TPM can allow services like Netflix or Hulu to validate the hardware and software you are currently running, which provides the base for a hardware DRM implementation as stated in the article. Don’t be surprised if your non-standard OS isn’t allowed to play back content due to its remote validation failing if this is implemented.
TPMs also have a unique, cryptographically verifiable identifier that is burnt into the chip and can be read from software. This amounts to an essentially unforgeable unique ID for each computer, as it is signed by the TPM manufacturer (in most cases Intel/AMD, since TPMs on consumer hardware are usually emulated in the CPU's TEE). If you were around for the Pentium III serial number controversy, this is a very similar issue. It's already used as the primary method of banning users in certain online video games, but I wouldn't be surprised to see it expand to services requiring it to prove you aren't a "bot" or similar if it gets wider adoption.
There is a great article going more into detail about the implications of TPM to privacy from several years ago, which was the basis for this reply. [3]
Curiosity and free time. You learn stuff like this by reading tens of thousands of lines of text and code for every line of code that you write.
I've always been all about the hidden fun stuff. The magical little programs that somehow configure audio cards. The ALSA mixer tool for example does it via special ioctls. I was reading its source code not too long ago. The manuals said those definitions were for the curious and that those ioctls were private, as though it was the library's author exclusive privilege to use those things. I seriously hate it when they say that. When they imply I'm some mere mortal who's better off using the libraries that were gifted to us by the gods of programming.
Good or bad, quite a bit of hubris is involved. Takes a certain audacity to think I can make a better wheel than people who are probably much smarter than I am. Sometimes I start projects just to prove to myself that I'm not clinically insane for thinking a better way is possible. Sometimes it works, sometimes it doesn't. Someone once called an idea I had schizophrenic. I'll never forget that day.
This Linux system call stuff started after I read an LWN article about glibc and Linux specific system call support, getrandom to be specific. Took glibc years to add support. I started a liblinux project because of that article. The idea was to get rid of libc and talk to Linux directly. In order to accomplish that, I was forced to learn a lot of compiler, linker and executable stuff. The musl libc source code taught me a lot.
It seems like the C library is doing a huge amount of stuff, but it turns out you don't actually need most of it. Linux just puts your binary in memory and jumps to an address specified in the ELF header. Normally this is when the C library or dynamic linker takes over in order to prepare to call main(). Turns out I can just replace all that with some simple code that calls a function and then exits the process when it returns. It just works. I won't have init/fini section processing, but I can live with that; that's harmful stuff that shouldn't have been invented to begin with.
Safety is not an extra feature a'la carte. These concepts are all inter-connected:
Safety requires unions to be safe, so unions have to become tagged enums. To have tagged enums usable, you have to have pattern matching, otherwise you'd get something awkward like C++ std::variant.
Borrow checking works only on borrowed values (as the name suggests), so you will need something else for long-lived/non-lexical storage. To avoid GC or automatic refcounting, you'll want moves with exclusive ownership.
Exclusive ownership lets you find all the places where a value is used for the last time, and you will want to prevent double-free and uninitialized memory, which is a job for RAII and destructors.
Static analysis of manual malloc/realloc and casting to/from void* is difficult, slow, and in many cases provably impossible, so you'll want to have safely implemented standard collections, and for these you'll want generics.
Not all bounds checks can be eliminated, so you'll want to have iterators to implement typical patterns without redundant bounds checks. Iterators need closures to be ergonomic.
…and so on.
Every time you plug a safety hole, it needs a language feature to control it, and then it needs another language feature to make this control fast and ergonomic.
If you start with "C but safe", and keep pulling that thread, nearly all of Rust will come out.
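A toy illustration of how a few of those features interlock (illustrative names, not from any codebase): a tagged enum replaces an unsafe union, pattern matching makes it usable, ownership plus Drop replace manual free, and iterators with closures avoid redundant bounds checks.

```rust
// Tagged enum instead of a raw union: the tag can't get out of sync
// with the payload.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Pattern matching: the compiler forces every variant to be
    // handled, so there's no reading the "wrong" union member.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    // Vec owns its storage via generics: no malloc/free to analyze,
    // and Drop releases it deterministically.
    let shapes = vec![
        Shape::Rect { w: 2.0, h: 3.0 },
        Shape::Rect { w: 1.0, h: 4.0 },
    ];
    // Iterator + closure: no index arithmetic, no redundant bounds
    // checks for the compiler to (maybe) eliminate.
    let total: f64 = shapes.iter().map(area).sum();
    assert_eq!(total, 10.0);
}
```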
The literature does not always put the “tracing” in front of the “garbage collection”.
For example, nobody says that Objective-C is garbage collected just because it has ARC. Nobody says that C++ is garbage collected even though shared_ptr is widespread. And systems that do tracing GC just call it GC (see for example https://www.oracle.com/webfolder/technetwork/tutorials/obe/j...)
To think clearly about the tradeoff between GC and RC it’s important to acknowledge the semantic differences:
- GC definitely collects dead cycles.
- RC knows exactly when objects die, which allows for sensible destructor semantics and things like CoW.
- it’s possible to use RC as an optimization in a GC, but then you end up with GC semantics and you still have tracing (hence: if it’s got tracing, it’s a garbage collector).
It’s a recent fad to say that RC is a kind of GC, but I don’t think it ever took off outside academia. Folks who write GCs call them GCs. Folks who do shared_ptr or ARC say that they don’t use GC.
And it's good if this fad dies, because saying that RC is a kind of GC causes folks to overlook the massive semantic elephant in the room: if you use a GC, you can't swap it for RC because you'd leak memory (RC would fail to delete cycles), and if you use RC and swap it for a GC, you'd leak resources (your destructors would no longer get called when you expect them to).
On the other hand, it is possible to change the guts of an RC impl without anyone noticing. And it’s possible to change the guts of a GC while preserving compatibility. So these are really two different worlds.
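The cycle-leak half of that asymmetry can be demonstrated in a few lines of Rust (illustrative example):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Plain reference counting never reclaims a cycle: the counts never
// reach zero. A tracing GC would collect this.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn cycle_survives_drop() -> bool {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None) });

    // Build the cycle a -> b -> a.
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    *b.next.borrow_mut() = Some(Rc::clone(&a));

    let watch: Weak<Node> = Rc::downgrade(&a);
    drop(a);
    drop(b);
    // Our handles are gone, but each node still holds the other at
    // count 1, so the allocation is leaked.
    watch.upgrade().is_some()
}

fn main() {
    assert!(cycle_survives_drop());
}
```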
This is probably related to the issue with NULL - NULL mentioned in the article.
Imagine you’re working in real mode on x86, in the compact or large memory model[1]. This means that a data pointer is basically struct{uint16_t off,seg;} encoding linear address (seg<<4)+off. This makes it annoying to have individual allocations (“objects”) >64K in size (because of the weird carries), so these models don’t allow that. (The huge model does, and it’s significantly slower.) Thus you legitimately have sizeof(size_t) == 2 but sizeof(uintptr_t) == 4 (hi Rust), and God help you if you compare or subtract pointers not within the same allocation. [Also, sizeof(void *) == 4 but sizeof(void (*)(void)) == 2 in the compact model, and the other way around in the medium model.]
Note the addressing scheme is non-bijective. The C standard is generally careful not to require the implementation to canonicalize pointers: if, say, char a[16] happens to be immediately followed by int b[8], an independently declared variable, it may well be that &a+16 (legal “one past” pointer) is {16,1} but &b is {0,2}, which refers to the exact same byte, but the compiler doesn’t have to do anything special because dereferencing &a+16 is UB (duh) and comparing (char *)(&a+16) with (char *)&b or subtracting one from the other is also UB (pointers to different objects).
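The non-bijectivity is easy to see from the address formula alone (a sketch of the 8086 scheme, linear = (seg << 4) + off):

```rust
// Real-mode x86 linear address: many seg:off pairs name one byte.
fn linear(seg: u16, off: u16) -> u32 {
    ((seg as u32) << 4) + off as u32
}

fn main() {
    // The "one past the end" pointer {off=16, seg=1} and the next
    // object's base {off=0, seg=2} hit the same linear byte 32...
    assert_eq!(linear(1, 16), linear(2, 0));
    // ...even though the raw pairs compare unequal.
    assert_ne!((1u16, 16u16), (2u16, 0u16));
    // Another alias of the same kind, 16 bytes apart in seg terms:
    assert_eq!(linear(0x0FFF, 0x0010), linear(0x1000, 0x0000));
}
```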
The issue with NULL == NULL and also with NULL - NULL is that now the null pointer is required to be canonical, or these expressions must canonicalize their operands. I don’t know why you’d ever make an implementation that has non-canonical NULLs, but I guess the text prior to this change allowed such.
The definitions in the floating point standard make much more sense when you read 0/INF as "something so close to/far from 0 that we cannot represent it", rather than as the actual concepts of 0 and infinity.
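A few lines make that reading concrete:

```rust
fn main() {
    // Too far from zero to represent: overflow rounds to infinity.
    assert_eq!(f64::MAX * 2.0, f64::INFINITY);
    // Too close to zero to represent: underflow rounds to zero.
    assert_eq!(1e-300 / 1e100, 0.0);
    // And the two compose the way the reading suggests:
    assert_eq!(1.0 / f64::INFINITY, 0.0);
    assert_eq!(1.0 / 0.0_f64, f64::INFINITY);
}
```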
"Both the fetch+decode and op cache pipelines can be active at the same time, and both feed into the in-order micro-op queue. Zen 4 could use its micro-op queue as a loop buffer, but Zen 5 does not. I asked why the loop buffer was gone in Zen 5 in side conversations. They quickly pointed out that the loop buffer wasn’t deleted. Rather, Zen 5’s frontend was a new design and the loop buffer never got added back. As to why, they said the loop buffer was primarily a power optimization. It could help IPC in some cases, but the primary goal was to let Zen 4 shut off much of the frontend in small loops. Adding any feature has an engineering cost, which has to be balanced against potential benefits. Just as with having dual decode clusters service a single thread, whether the loop buffer was worth engineer time was apparently “no”."
It sure seems to be that way with pure software products. I'd not be surprised if is the same with embedded software in things like cars if they can be easily updated.
Back in the days before home internet was common I worked at a place that developed software for personal computers. The software was sold in retail stores such as Egghead, CompUSA, etc on floppy or CD.
Management was very insistent on high software quality. If a bug got through that was bad enough that an update to existing customers was needed the only option, aside from the small number of customers who would be able to use a modem to download an update from a dial up BBS, would be for us to mail them a floppy or CD with the update.
Even minor bugs that could be easily worked around by the customer were bad, because we had no means of contacting our customers except by writing or calling those who had filled out and returned in the prepaid registration card that was in the box when they bought the software. Instead what would happen is they would call our toll free support line.
Remember, this was not a subscription service. It's a one time purchase. If someone needs a long support call to solve a problem that could easily wipe out all profit made on that customer and more.
This wasn't just about bugs. If they had trouble using some feature due to poor UI or inadequate documentation that also meant calls to the toll free support line and so management also was very insistent on clear, well written, comprehensive documentation.
Compare to today. Updates are easy, you can have FAQs and searchable support articles on your website, someone will make a Reddit group where your customers can help each other, there will be bloggers and YouTubers posting tricks and solutions that in the old days would have been in the manual, and no one expects everything to work in any given release.
I see various comments below along the lines of “oh, the article is missing so and so”. OK… then please see the other articles in this series! I think they cover most of what you are mentioning :-)
Some of these were previously discussed here too, but composing this on mobile and finding links is rather painful… so excuse me for not providing those links now.
Hyrum's Law is one of those observations that's certainly useful, but be careful not to fixate on it and draw the wrong conclusions. Consider that even the total runtime of a function is an observable property, which means that optimizing a function to make it faster is a breaking change (what if suddenly one of your queues clears too fast and triggers a deadlock??), despite the fact that 99.99999999% of your users would probably appreciate having code that runs faster for no effort on their part.
Therefore it's unavoidable that what constitutes a "breaking change" is a social contract, not a technical contract, because the alternative is that literally nothing is ever allowed to change. So as a library author, document what parts of your API are guaranteed not to change, be reasonable, and have empathy for your users. And as a library consumer, understand that making undocumented interfaces into load-bearing constructs is done at your own risk, and have empathy for the library authors.
This is the issue I have with the "build vs buy (or import)" aspect of today's programming.
There are countless gems, libraries or packages out there that make your life easier and development so much faster.
But software (in my experience) always lives longer than you expect it to, so you need to be sure that your dependencies will be maintained for that lifetime (or have enough time to do the maintenance or plug in the replacements yourself).
Spatial memory is a very useful and important part of navigation. One thing I (and probably all of us) miss is the ability to create nameless folders. Well, we sort of can, with “New folder (24)” and “dklstnrigwh”.
I’d like to use spatial sorting more, even in regular folders, but it’s fragile and an afterthought in most UIs. Also, they provide no visual cues, so I can’t leave my file in the left black drawer with a flower sticker on it.
Yes I know about Bob and how it failed. But it works for me in e.g. Fallout games where I can use multiple boxes in different rooms to sort my loot. I just know that my plasma rifle is in the metal box near the bed and cells are in the dresser.
The ability to find things without creating, remembering and searching the names or containers feels good.
QNX is a microkernel-based real time operating system. The kernel is tiny; it was about 60KB (not MB) twenty years ago. All the kernel does is message passing, timers, CPU dispatching, and memory allocation. Everything else is in user space, including file systems.
Everything is a message, rather than being a file. Messaging works like a function call - you send a block of data to another process, wait, and get a reply back. Processes which receive messages act like servers - they have a thread or threads waiting for incoming requests, and when a request comes in, it's handled and a reply is sent back. It's a microservices architecture.
Unlike Linux, it's a fast microservices architecture. Message passing and CPU dispatching are integrated. When a process sends a message to a service process, the sender blocks. (Timeouts are available.) If the service process isn't busy, control immediately transfers to the service process without a trip through the CPU dispatcher. When the service process replies, the reverse happens, and control goes back to the original sender. With most inter-process communication systems, there's queuing delay for this kind of operation. QNX got this right. This is the core insight behind QNX.
Yes, there is message passing copying overhead. In practice it's under 10%. I've sent video through the message system. Copying is less of a problem than you might expect, because, with today's large CPU caches, the data being copied is probably warm and in cache.
All this is real-time aware. Higher priority processes get their messages through first. If you call a service, it inherits the caller's priority until it replies. This prevents priority inversion, where a high priority process calls a low priority one and gets preempted by lower priority work. This works so well that I was able to do compiles and web browsing on a single-CPU machine that was also running a robot vehicle.
There's a tool for building boot images. You put in the kernel, a standard utility process called "proc", and whatever else you need available at startup. For deeply embedded systems, the application might go in the boot image. It's all read-only and can execute from ROM if needed, which is good for applications where you really, really want startup to always work.
Files and file systems are optional. For systems with files, there are file system and disk drivers to put in the boot image. They run in user space. There's a standard startup program set that creates a Unix-type environment at boot time. This is all done in user space. The file system is accessed by message passing.
System calls look like POSIX. Most of them are implemented in a library, not the kernel. Service processes do the work.
When an application calls POSIX "read", the library makes an interprocess function call to the file system or network service server. Program loading ("exec") is done in user space. The boot image can contain .so files. "Exec" is done by a .so file that loads the image. So the kernel doesn't have to worry about executable format, and program loading is done by untrusted code.
Because it uses POSIX, most command line UNIX and Linux programs will compile and run. That's QNX's big advantage over L4. L4 is a good microkernel, but it's so stripped down it's just a hypervisor. Typically, people run Linux on top of L4, so all the bloat comes back.
There is no paging at the OS level. That would kill real-time. There's a paging to disk library that can be used by programs such as compilers with big transient memory needs, but most programs don't use it. The effect is that responsiveness is very consistent.
I miss using QNX desktop. There's no lag.
The most important operation in QNX is MsgSend, which works like an interprocess subroutine call. It sends a byte array to another process and waits for a byte array reply and a status code. All I/O and network requests do a MsgSend. The C/C++ libraries handle that and simulate POSIX semantics. The design of the OS is optimized to make MsgSend fast.
A MsgSend is to another service process, hopefully waiting on a MsgReceive. For the case where the service process is idle, waiting on a MsgReceive, there is a fast path where the sending thread is blocked, the receiving thread is unblocked, and control is immediately transferred without a trip through the scheduler. The receiving process inherits the sender's priority and CPU quantum. When the service process does a MsgReply, control is transferred back in a similar way.
This fast path offers some big advantages. There's no scheduling delay; the control transfer happens immediately, almost like a coroutine. There's no CPU switch, so the data that's being sent is in the cache the service process will need. This minimizes the penalty for data copying; the message being copied is usually in the highest level cache.
Inheriting the sender's priority avoids priority inversions, where a high-priority process calls a lower-priority one and stalls. QNX is a real-time system, and priorities are taken very seriously. MsgSend/Receive is priority based; higher priorities preempt lower ones. This gives QNX the unusual property that file system and network access are also priority based. I've run hard real time programs while doing compiles and web browsing on the same machine. The real-time code wasn't slowed by that. (Sadly, with the latest release, QNX is discontinuing support for self-hosted development. QNX is mostly being used for auto dashboards and mobile devices now, so everybody is cross-developing. The IDE is Eclipse, by the way.)
Inheriting the sender's CPU quantum (time left before another task at the same priority gets to run) means that calling a server neither puts you at the end of the line for CPU nor puts you at the head of the line. It's just like a subroutine call for scheduling purposes.
MsgReceive returns an ID for replying to the message; that's used in the MsgReply. So one server can serve many clients. You can have multiple threads in MsgReceive/process/MsgReply loops, so you can have multiple servers running in parallel for concurrency.
This isn't that hard to implement. It's not a secret; it's in the QNX documentation. But few OSs work that way. Most OSs (Unix-domain sockets, System V messaging) have unidirectional message passing, so when the caller sends, the receiver is unblocked, and the sender continues to run. The sender then typically reads from a channel for a reply, which blocks it. This approach means several trips through the CPU scheduler and behaves badly under heavy CPU load. Most of those systems don't support the many-to-one or many-to-many case.
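The send/receive/reply shape (though not the kernel fast path) can be modeled in ordinary Rust with channels. This is a toy sketch, not the real QNX API; real QNX does the blocking handoff in the kernel with a direct context transfer:

```rust
use std::sync::mpsc;
use std::thread;

// A request carries its payload plus a private reply channel, so one
// server can serve many clients (many-to-one).
type Request = (Vec<u8>, mpsc::Sender<Vec<u8>>);

fn spawn_server() -> mpsc::Sender<Request> {
    let (req_tx, req_rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        // "MsgReceive" loop: get a request, process it, "MsgReply".
        while let Ok((msg, reply_tx)) = req_rx.recv() {
            let reply: Vec<u8> = msg.iter().rev().copied().collect();
            let _ = reply_tx.send(reply);
        }
    });
    req_tx
}

// "MsgSend": send the request and block until the reply arrives,
// like an interprocess subroutine call.
fn msg_send(server: &mpsc::Sender<Request>, msg: Vec<u8>) -> Vec<u8> {
    let (reply_tx, reply_rx) = mpsc::channel();
    server.send((msg, reply_tx)).unwrap();
    reply_rx.recv().unwrap() // sender blocks here
}

fn main() {
    let server = spawn_server();
    assert_eq!(msg_send(&server, vec![1, 2, 3]), vec![3, 2, 1]);
    assert_eq!(msg_send(&server, b"ping".to_vec()), b"gnip".to_vec());
}
```

Note what the model cannot express: here each round trip takes several scheduler wakeups, exactly the overhead the QNX fast path avoids by transferring control directly to the receiver.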
Somebody really should write a microkernel like this in Rust. The actual QNX kernel occupies only about 60K bytes on an IA-32 machine, plus a process called "proc" which does various privileged functions but runs as a user process. So it's not a huge job.
All drivers are user processes. There is no such thing as a kernel driver in QNX. Boot images can contain user processes to be started at boot time, which is how initial drivers get loaded. Almost everything is an optional component, including the file system. Code is ROMable, and for small embedded devices, all the code may be in ROM. On the other hand, QNX can be configured as a web server or a desktop system, although this is rarely done.
There's no paging or swapping. This is real-time, and there may not even be a disk. (Paging can be supported within a process, and that's done for gcc, but not much else.) This makes for a nicely responsive desktop system.
AI is not remotely limited to Neural Networks.