After all of this hype, this is the best they can do? This is the forefront company (arguably) of the forefront tech, and no one can review slides before they're shipped out? I think the reason this has resonated with people is that it gives off a "vibe" of not giving a shit: they'll ship whatever next slop generator they want and they expect people to gladly lap it up. Either that or they're eating their own dog food and the result is this mess. Do the stats even matter anymore? Is that what they're banking on?
The distributed computing model is pretty nice in theory (maybe not in practice), and the uniform system APIs are also nice. The userspace tools in particular are just plain better: structured regex commands are quite a bit better than ed-style ones, and I find myself using them far more frequently in vis than I do in vim; they're far more composable and intuitive.
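To make the "structured regex" point concrete: sam's x/pat/ loops a sub-command over every match instead of operating line-at-a-time, and commands chain together. A rough Python sketch of that composition (the function names x and g are mine, and real sam selections are ranges in the file rather than plain strings):

```python
import re

def x(pattern, text):
    """Like sam's x/pat/: yield every match of pat as a selection."""
    return (m.group(0) for m in re.finditer(pattern, text))

def g(pattern, selections):
    """Like sam's g/pat/: keep only selections that also match pat."""
    return (s for s in selections if re.search(pattern, s))

# x/[a-z]+/ g/o/  -- select every word, keep those containing 'o'
words = list(g('o', x('[a-z]+', 'the quick brown fox')))
print(words)  # ['brown', 'fox']
```

The point of the structure is that each stage narrows or transforms selections, so you build pipelines over matches rather than re-describing line boundaries at every step.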
The biggest thing is the heavy reliance on union file systems (and file systems in general) and an extremely simple syscall API. It's a heterogeneous-networked-node OS, so it handles realistic workloads natively, with primitives designed for them, instead of piling complexity on top of Unix-like APIs (i.e. Linux). I dunno, I just think a lot of the modern "cloud native" stack is unnecessary if you had an OS actually built for the workloads we have.
There aren't really union filesystems per se; the Plan 9 kernel provides unions through its namespace model. In my opinion, part of the reason the userspace tools can be as nice as they are is the use of file system interfaces and the simple syscall API. Could you elaborate on the issues you see with these?
In regards to using it for a "cloud native" stack, the issue is that people want to run code that isn't designed for Plan 9. You could build whatever backplane type thing you want out of Plan 9, but the end goal is still likely to be running some web app or REST API server. Unless someone puts in a great deal of effort to port all of the environments that people want (nodejs, modern python, etc.), you're going to be stuck using a VM and losing a lot of the benefit.
This feels similar to what Joyent did with lx zones in SmartOS, where the backplane was Solaris-based but the apps they were running for clients were built for Linux. It's hard to make the Plan 9 backplane enough better to warrant dealing with integrating the guest and host environments.
> Unless someone puts in a great deal of effort to port all of the environments that people want (nodejs, modern python, etc.), you're going to be stuck using a VM and losing a lot of the benefit.
It should not be a huge amount of effort since, as you mention, the Plan 9 syscall API is simpler than Linux's. The added Plan 9 support could then also serve as a kind of "toy" backend that could make the rest of the code more understandable in other ways.
I'd even argue that OP's early experiment with such a port of tailscale shows precisely such an outcome.
> There aren't really union filesystems per se; the Plan 9 kernel provides unions through its namespace model.
Yes, this is what I'm referring to. It's really many filesystems unioned into one namespace that is controllable per-process.
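For anyone who hasn't used it, one way to picture a union: an ordered stack of directories searched on each name lookup, with the stack private to the process. A toy Python model (UnionDir and its methods are my invention; the real mechanism is the kernel's bind(2)/mount(2) acting on the per-process namespace):

```python
import os

class UnionDir:
    """Toy model of a Plan 9 union mount point: an ordered stack of
    directories searched, in bind order, on every name lookup."""
    def __init__(self):
        self.layers = []

    def bind(self, path, before=True):
        # like bind -b (prepend, shadowing existing names) vs bind -a (append)
        if before:
            self.layers.insert(0, path)
        else:
            self.layers.append(path)

    def walk(self, name):
        # first layer containing the name wins
        for layer in self.layers:
            p = os.path.join(layer, name)
            if os.path.exists(p):
                return p
        raise FileNotFoundError(name)
```

On Plan 9 this list lives in the kernel and each process (or group sharing a namespace) has its own copy, which is how /bin can be a union of system and per-user directories without any PATH variable.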
> In my opinion, part of the reason the userspace tools can be as nice as they are is the use of file system interfaces and the simple syscall API. Could you elaborate on the issues you see with these?
I didn't say I had any issues, I said I preferred them! Aside from a lack of familiarity and needing to install plan9ports on other systems, I haven't had issues.
> In regards to using it for a "cloud native" stack, the issue is that people want to run code that isn't designed for Plan 9. You could build whatever backplane type thing you want out of Plan 9, but the end goal is still likely to be running some web app or REST API server.
Right, language support is the biggest issue with running on Plan 9 from that perspective, at least for "server" workloads. Excluding graphical APIs, the basic stuff (file IO, networking, etc.) isn't all that hard to add to a language (it of course depends). The real trouble is things that have no equivalent in Plan 9, such as mmap and shm.
> This feels similar to what Joyent did with lx zones in SmartOS, where the backplane was Solaris-based but the apps they were running for clients were built for Linux.
This is also what Oxide is doing. Their rack's OS is illumos, but their customers are expected to interface with the OS only via Oxide's tooling, provisioning VMs rather than touching the host directly.
> It's hard to make the Plan 9 backplane enough better to warrant dealing with integrating the guest and host environments
If I were doing it, I would do it the other way! Run Plan 9 in a backplane/hypervisor and target it from the language level. The nice part is the systems programming model!
> Excluding graphical APIs, the basic stuff (file IO, networking, etc.) isn't all that hard to add to a language (it of course depends).
You could implement a modern graphical API on top of virtio-gpu, which would give you low-level access to accelerated graphics.
> The real trouble is things that have no equivalent in Plan 9, such as mmap and shm.
Some uses of mmap and shm actually seem to have a near-equivalent already in plan9's segattach. Other uses would require some implementation of distributed shared memory (i.e. implementing the usual CPU concurrency model over the network) to be made feasible while keeping to the usual networked-OS focus of plan9.
Right, you could do it... maybe. Some languages/libraries/runtimes could have specific expectations about mmap semantics that can't easily be papered over, but I suspect that would be a minority of cases.
It could be used to replace k8s-based deployments (also Docker Swarm, etc.), since system interfaces on Plan 9 are namespaced and containerized "out-of-the-box" as part of its basic design (this is one of its most prominent additions compared to *NIX). It's not a hacked-on feature as with Linux.
Yes, you're correct, my apologies. There has been work on this going the other way as well: https://github.com/michaelforney/wl9. But there's still a lot more that can be done. There are vague plans to test the waters by implementing something like this in our vmx(1).
As the sibling comment has mentioned, Unicode in DNS uses a punycode encoding, but even further than that, the standard specifies that the Unicode data must be normalized to NFC[0] before being converted to punycode. This means that your second example (decomposed e with combining acute accent vs the composed variant) is not a valid concern. The Cyrillic one is, however.
[0] https://www.rfc-editor.org/rfc/rfc5891 § 4.1 "By the time a string enters the IDNA registration process as described in this specification, it MUST be in Unicode and in Normalization Form C"
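You can check this with Python's built-in IDNA codec (it implements the older IDNA 2003/nameprep pipeline rather than RFC 5891 exactly, but the normalization step behaves the same way for this example):

```python
import unicodedata

composed   = 'caf\u00e9'    # e-acute as a single code point
decomposed = 'cafe\u0301'   # 'e' followed by combining acute accent

# NFC folds the decomposed form into the composed one...
assert unicodedata.normalize('NFC', decomposed) == composed

# ...and the IDNA pipeline normalizes before punycode, so both forms
# map to the same ASCII label
assert composed.encode('idna') == decomposed.encode('idna')

# Cyrillic 'a' (U+0430) is not normalized to Latin 'a', so the
# lookalike really is a distinct name
assert '\u0430pple'.encode('idna') != 'apple'.encode('idna')
```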
Sure, but I feel the security concerns there are much less serious than having multiple domain names with the same visual appearance that point to different servers. That has immediate impact for things like phishing, whereas lookalike path or query portions would at least ensure you are still connecting to the server you think you are.
Normalization implementations must not strip variation selectors by definition. The "normal" part of normalization means converting a string into either consistently decomposed Unicode or consistently composed Unicode, i.e. U+00DC vs U+0055 + U+0308. However, this decomposition mapping is also used (maybe more like abused) for converting certain "legacy" code points to non-legacy code points. There does not exist a rune which decomposes to variation selectors (and thus variation selectors do not compose into anything), so normalization must not alter or strip them.
source: I've implemented Unicode normalization from scratch
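These properties are easy to check with Python's unicodedata module (U+FE0F below is variation selector 16):

```python
import unicodedata

# U+00DC decomposes to U+0055 + U+0308 under NFD; NFC re-composes it
assert unicodedata.normalize('NFD', '\u00dc') == 'U\u0308'
assert unicodedata.normalize('NFC', 'U\u0308') == '\u00dc'

# A variation selector has no decomposition and composes with nothing,
# so every normalization form must preserve it
s = '\u2764\ufe0f'  # heavy black heart + VS16
for form in ('NFC', 'NFD', 'NFKC', 'NFKD'):
    assert '\ufe0f' in unicodedata.normalize(form, s)
```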
I write a lot of code for 9front within 9front, all of which is done through the sam editor. While sam does have a powerful general purpose editing language, it doesn't do syntax highlighting or any sort of language-aware tooling (LSPs, jump to def, autocomplete, whatever). I didn't start off with this, though. I'm not too old (in the second half of my 20s now), so I did take the tour through Java IDEs, tricked-out vim configs, and VS Code as I was learning how to program. When I moved to working on 9front, however, I actually felt like the lack of these features made it easier for me to focus.
I like to think of code as not that much different from prose: they are both strings of text for communicating information, typically in a fashion of one thing after the other. I think most people would find syntax highlighting for prose more annoying than not (outside of perhaps seeing grammar rules while learning). Once I tried reading and writing code without syntax highlighting, I found that it encouraged me to actually read and digest code instead of just skimming it. Compare it to reading prose with and without certain subsections highlighted.
Autocomplete strikes me as optimizing the wrong end of the problem. When I'm writing code I generally spend a lot more time thinking about the problem space or considering possible implementations than I do with my fingers on the keyboard actively typing it out. In general, the more carefully you're able to think about code, the smaller it gets, so I find it hard to believe that by making it easier to quickly dump large amounts of text on the screen you're really gaining much. I think there should be a larger focus on reading and understanding code than on writing it.
Stuff like code search is quite nice, and even in 9front we do have some scripts and tooling built in to help with that. We have programs like 'Bfn' which can search for a function and send it to your text editor; file names with line numbers can also be quickly sent to the editor. I think advancements in tooling that help people move around in code are generally great; the time spent searching for something is not something I enjoy. This was perhaps the nicest part of LSPs in my experience. However, I do also think that if you make it quite easy to jump around to lots of different files, there is less of an incentive to carefully consider how you're laying out your code. The way 9front works, with some tooling to reduce the monotony but not enough to make it easy to traverse a couple-million-line Java project, strikes a nice balance for me.
A lot of discussion in this thread is pointing out that chromium is a thing and that it would be hard for a company to properly fund a web browser without the backing of a tech giant whose more direct revenue stream is elsewhere. I think this showcases a larger issue with the web as it stands today. Why has a browser for the "open web" become such a complex piece of software that it requires the graces of a tech giant even to keep pace? Can nothing be done to the web to lower the barrier to entry such that an independent group (a la OpenBSD or similar) can maintain their own? Right now it seems this is only possible if you accept that you'll only be able to build on top of chromium.
I know the focus by the DOJ here seems to be more on search and less on the technical control that Google has over the web experience through implementation complexity, however I can only hope that by turning off the flow of free cash more "alternative" browsers are given some space to catch up. Things like manifest V3 show that Google is no stranger to tightening the leash if the innovation of web technologies impact their bottom line, I'd like to have a web where this type of control isn't possible.
That was the goal, not an accident. The length of the standard itself is comparable to the kloc count of a medium-sized serious project.
They've driven these numbers up to ensure that no one except them and their leashed pets could repeat it.
And here we are, you can have ten internet-enabled apps with texts, images and videos, basically the same functionality, but you can only copy nine of them.
> Can nothing be done to the web to lower the barrier to entry such that an independent group (a la OpenBSD or similar) can maintain their own?
Sure, we can have the original web with text and the occasional embedded photo. But if you want what amounts to a full blown operating system, with a rock solid sandbox, plus an extremely performant virtual machine, that’s going to be a high bar.
> Can nothing be done to the web to lower the barrier to entry such that an independent group (a la OpenBSD or similar) can maintain their own?
Of course it can and it is done: Linux Foundation Europe runs Servo, GNOME Foundation runs WebKitGTK and Epiphany, Ladybird Browser Initiative runs Ladybird.
Despite existing since 2012, and getting funding from several companies, Servo development has been intermittent. It's now pretty usable, but it's not a success story in keeping pace with the tech giants.
> A lot of discussion in this thread is pointing out that chromium is a thing and that it would be hard for a company to properly fund a web browser without the backing of a tech giant whose more direct revenue stream is elsewhere.
This is not an issue though, is it?
Like all those magazine subscriptions make their money off ads. The idea that a business can't survive on its own is fine, no?
If it's a singular tech giant then that's a problem, but if Chrome had contracts with a dozen+ companies then it sounds really sustainable.
> Like all those magazine subscriptions make their money off ads. The idea that a business can't survive on its own is fine, no?
This is not quite the same. If a single magazine becomes more ads than decent content, it is not insurmountable for another company to start a competitor. It's not ad income itself that is bad; it's that in the case of a web browser it is insurmountable for a company to start a competitor from scratch. It wasn't always the case, but because Google has dumped so much engineering into Chrome, they've effectively pulled up the ladder behind them.
I can't speak for OpenBSD specifically, but I can speak to some of my thoughts on why an operating system continues to use C. Supporting a language ecosystem is not easy; the fewer "default" languages needed to bootstrap the core system, the better. The nice part about C is that it's one of the few languages suited for both kernel space and user space. Out of the alternatives you listed, the only language that could even seriously be considered for kernel space is Rust, and even that took a lot of back and forth to get to that point in the Linux kernel. Higher-level languages have a larger range of assumptions, and you have to carry those accommodations into kernel space if you want to use them. There is also the issue that memory management in kernel space is a much more complicated environment than in user space. How do I teach the borrow checker about my MMU bring-up and maintenance?
I am also skeptical of your claim that removing memory bugs frees up brain space for logic bugs, at least for Rust. Rust has grown quite a number of language features that, in my experience, result in a higher cognitive load compared to C. If you seriously reduce your reliance on the C macro system (as Plan 9 has shown possible), the language itself is quite simple.
I didn't directly mention third party software, but when I talk about the various levels of default software, the implication is that those with less built in typically rely more heavily on third party software. Even those who ship a more batteries-included base still have to provide mechanisms for using third party software, given the ecosystem.
> ... has a lot of third party software that would be hard to maintain along with the rest of the system
This is the point that the article is trying to challenge. I think 9front proves that it's doable.
> Most people don't care what commit your system is built from as long as it works as their programs expect it to.
The former helps the latter a lot. Everything is tested with everything else, and for a lot of functionality there is only one option.
Your link for 9front mentions that ssh2 is not included. This is because the code was rewritten and the program is now just called ssh(1). Other features of ssh are accessible through sshfs(4), and sshnet(4). The only difference in features compared to the original Plan 9 is that 9front does not currently have code for an ssh server. I know some users who are interested in this capability so it'll likely happen at some point.