Remember, you're comparing Linux and FreeBSD in 2022 but BSD lost to Linux much earlier, many years ago. Back when I was looking into them (long time ago, excuse me for not remembering the details), BSD felt more pleasant and coherent. But at the same time it had limitations on scalability, performance and compatibility with hardware and also with userland software. In every benchmark, especially on multi-core, multi-socket systems, Linux was ahead.
My theory at the time was this: GNOME won on developers' desktops, so most software was developed on Linux natively, with BSD compatibility (and performance) as an afterthought. IIRC Linus made a similar point on the mailing list: developers love servers that resemble their programming environments. TL;DR: BSDs got stuck in CLI-only mode for too long.
The more common explanation was that Linux got a head start of a few years by being a clean-sheet implementation, while BSD had to spend its early years purging itself of AT&T-copyrighted code, which made it untouchable from a commercial-use perspective.
I remember why I chose Linux in 1998 for my desktop, and why I would have chosen it for my server.
Hardware compatibility. I could install Linux on my shabby work desktop, and it just worked. Actually it worked more stably than NT 4.
Binary distros. I could apt-get install stuff onto my box in minutes. I rarely had to build things from source.
Speed of change. Linux was acquiring features at breakneck speed. Large companies started contributing. SMP, interesting networking stuff, better disk I/O, new filesystems, stuff like that. Hell, Windows emulation good enough to run StarCraft! It felt alive and cared for. It was apparent that many serious businesses wanted to bet big on Linux. Some say marketing; I say GPL and project guidance.
I also had a lovely server box with FreeBSD. It had select compatible hardware. It had really nice documentation. It ran Apache and Squid pretty well. I had to build the latter from source, IIRC. I had to build a lot from source (slow in 1998). If something wasn't available as a buildable package, I often had to tweak header files to make it build. For many amenities I took for granted on my Linux box, I decided it was too much hassle to get them built on BSD.
Features like SMP or journaling file systems came a bit late to FreeBSD. Maybe they were more solid, and achieved performance parity with Linux over time. Sadly, by then the industry had largely made its choice.
I also find modern Linux a mess, and run a minimalist distro (Void) on my laptop. I could consider running BSD on a server, but most servers now have to run VMs and containers, and most tooling just assumes Linux.
> I remember why I chose linux in 1998 for my desktop, and would choose for my server.
>
> Features like SMP or journaling file systems were a bit late in FreeBSD
Yep, it was timing for me - the Asus BP6 allowed dual Celerons, but only Windows 2000 and Linux 2.2 supported SMP on it. I was using FreeBSD at the time, but had to move because of hardware support.
But I wouldn't say it was hardware support alone - I think GNU/Linux won because of the licensing. FreeBSD's non-viral license would have turned people off contributing, only for companies to redistribute their code closed source... that's my take anyway.
We're far from the days of getting a free Red Hat CD in a book, just popping it in a drive and restarting to have it start a Linux install. You're lucky now if you don't have to enable USB booting, register a hash/key, turn on a 3rd-party cert, or just turn off secure boot to get a USB drive to boot.
"With Linux, I just booted from a Linux boot floppy with my Linux install CD in the CD-ROM drive, and ran the installation. With BSD...it could not find the drive because I had an IDE CD-ROM and it only supported SCSI."
"It insisted on being given a disk upon which it could completely repartition. [...] Linux, on the other hand, was happy to come second after my existing DOS/Windows."
"By the time the BSD people realized they really should be supporting IDE CD-ROM and get along with prior DOS/Windows on the same disk, Linux was way ahead."
IDE was one of many hardware issues that just took too long to be solved. For a long time, BSDs didn't seriously try to support consumer-grade hardware - be it because of lack of manpower, conservative attitudes, or "commercial" choices. OpenBSD still doesn't support Bluetooth...
On the other side, the Linux community fought hard to get everything to work, creating positive loops: the more hardware it supported, the more people could get it to work on their hobby hardware, the more they'd become familiar with it and push for adoption at work.
Exactly, "commercial" choice. Any OS that wants to support consumer-grade hardware, in 2022, must provide Bluetooth; OpenBSD just doesn't care enough about the consumer market to do that.
I have to get an outlet for my masochism tendencies somehow, at least this doesn't leave scars - physical ones, that is.
I have to say though, my insistent trolling of the project on forums like these occasionally contributes to the push to progress - like the long-requested syspatch. I'm not sure what yours achieves.
Not sure what you could mean by “lost”, when BSD family operating systems are so widely adopted. I have more devices at home running such than I do Linux, not even counting Apple’s Mach hybrid. My internet traffic passes through more BSD-based than Linux-based devices as it crosses the globe, and my data resides on a variety of platforms that include both (and more besides).
These systems aren’t busy trumpeting their presence. Maybe the originators just don’t have fragile egos.
> My theory at the time was this: GNOME won on developers' desktops, so most software was developed on Linux natively, with BSD compatibility (and performance) as an afterthought.
When I tried FreeBSD a couple of years ago, I was actually surprised that GNOME worked OOTB. I remember some of the settings were gated off (Bluetooth? can't remember the specifics), but the base desktop worked exactly as before.
Maybe it was different in the early days, though: dunno, wasn't there.
> Installing a 3rd party agent that in some way permits shell access to a server and (I assume) needs to run constantly is definitely going to raise a few eyebrows especially when the benefit of using is actually fairly low.
You're already running sshd. Teleport is a drop-in open source replacement, which offers some interesting features like certificate-only auth (removing the need to manage public/private keys), SSO integration, RBAC over SSH, support for protocols other than SSH (the Kubernetes API and major OSS databases), and syscall-level authorization and audit, so quite a few security teams have appreciated it lately.
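Teleport's internals aren't shown here, but the core idea behind certificate-only auth can be sketched with plain OpenSSH, which Teleport automates at scale. The filenames and the principal name `alice` below are made up for illustration:

```shell
# Create a CA key pair and a user key pair (no passphrase, for the demo)
ssh-keygen -q -t ed25519 -f ca_key -N ''
ssh-keygen -q -t ed25519 -f user_key -N ''

# Sign the user's public key with the CA, producing user_key-cert.pub,
# valid for 8 hours and only for the principal "alice"
ssh-keygen -q -s ca_key -I alice -n alice -V +8h user_key.pub
```

A server that trusts `ca_key.pub` (via `TrustedUserCAKeys` in `sshd_config`) then accepts the short-lived certificate without any per-user `authorized_keys` entries, which is the property the "no key management" claim refers to.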
Disclaimer: I work at Teleport and was a maintainer for its first 3 years.
SSHD has a proven track record. A program (open source or not) that replaces a proven bastion of security while offering "some interesting features" is exactly the sort of thing a security team will balk at.
Software that's a "known quantity" that a lot of people can support is a big thing when it comes to security. I've not heard of Teleport until this post.
Don't get me wrong, I'm sure Teleport is wonderful at what it does, probably more secure than sshd, etc. But it hasn't earned that trust among a wide base yet, so people will be hesitant to use it. Hell, even mosh can be hard to get accepted as a trusted thing.
Part of me thinks that we have reached the tipping point where all cloud dev tooling needs to be thrown away and we need to start over.
I had the same feeling when we reached the pinnacle of complexity of Windows programming with COM/DCOM/ActiveX/.NET/WinForms/Silverlight/Visual Studio... All of that felt like necessary progress. Yet, a simple script piping text output into a browser via CGI felt like a breath of fresh air. We need this for web development now.
> but things changed with the advent of microservices.
Microservices is just a marketing buzzword invented by container orchestration start-ups. Partitioning of large applications across multiple inter-connected processes has been around for decades. Your computer is packed with microservices.
On Linux, type `watch date` and enjoy two microservices running and interacting. On Windows, observe a dozen svchost.exe processes in the Task Manager.
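The same point can be made with any shell pipeline: two ordinary processes cooperating over a kernel-mediated channel, which is structurally the producer/consumer pattern microservices recreate over the network.

```shell
# Two processes, one pipe: `date` produces a line, `tr` consumes and
# transforms it. Same machine, IPC handled by the kernel.
date -u | tr '[:lower:]' '[:upper:]'
```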
I have the impression that the microservice architecture lowers the... 'activation energy', if you will, of building components that operate in a different paradigm. It's intensely pursuing the idea that a subservice is only its interfaces. And behind each of those curtains, it would be practical to construct something really weird.
Of course, it's cheaper and intellectually less stressful to just spin up a container, so that's what everyone does... so far. I mean, you need a TCP/IP stack somewhere.
Perhaps a microservice component embodied entirely on a GPU is an easily accessible example, or one on those fancy ML-specific processors in the bowels of Google.
I don't know about marketing, but things are much simpler when everything runs on the same machine. There are fewer points of failure, IPC latencies are significantly lower, and you're looking at the same system clock (unless you're reading RDTSC, which is core-specific). It's not quite the same, in practical terms.
We got some of the distributed aspects, when we got multiple processor cores and multiple cpus, but it's still not exactly the same thing.
> Maybe it's a result of wealth inequality and monopolization?
Well, the author did compare the current rate of innovation to the dawn of the 20th century, when (according to him) we had plenty of new ideas. But wealth inequality in 1900-1920 was comparable to, if not higher than, today's. The rise of the middle class in the US didn't happen until after WW2.
Additionally, and admittedly without evidentiary support, we could argue that the two biggest suppliers of capital investment for innovation have been business and Government, and largely still are.
However, business has focused on short-term gains for investors and on concentrating wealth in the hands of its top executives.
Government, at least in the US, has worked itself into a position of partisanship such that deadlock and preventing the other side from doing anything seems to be more important.
These are both short-sighted views and approaches. Scientific discovery takes time, requires consistent investment and a longer-term vision. Quite honestly, I don't think it's a radical view to take to blame the current climate of business and Government in the US for many of the issues we see elucidated here. That being said, the solution is as complex as the problem and would require a major restructuring of both the economy and our Government, IMO.
I think you're spot on with this analysis. I would add that the gridlock in the U.S. government isn't just happenstance, either. An ineffective government benefits Capital (at least in the short term), and thanks to lax lobbying and campaign finance laws, Capital is able to buy enough politicians to ensure that any legislation which would challenge the status quo dies before it reaches the President's desk. Gridlock serves the political and economic elite well, so they're invested in keeping it that way.
I wonder if a big difference between now and the start of the 20th century is precisely the insane focus on short term return, what with the feedback loops being so much quicker and visibility so much higher
I don't know what you mean by "the government" working its way into a period of partisanship. Sure, there is terrible deadlock. One party has moved to an extreme and is trying to dismantle democracy so that it can force things its way. Only one party has reached the point where previous presidents and leaders in the party are almost uniformly against that party's likely next president.
In my eyes, the partisanship is the deadlock. I didn't want to make it a political discussion or turn it into a flame war over blaming one party over another. Both parties, or members on each side, have a vested self-interest in maintaining that deadlock. That's really all I meant.
Education is no longer the ticket for poor people to escape. The rich invest in getting their kids to university. We are creating a self-sustaining nobility again.
On the one hand, the fourth edition includes an introduction to deep learning that's as up-to-date as it's possible for a printed book to be. On the other, it's placed late in the book, and many people these days would skip straight to the neural nets. Up to you. I think it's valuable for condensing an incredible amount of stuff in an accessible unified introductory way.
Right, if your goal is to understand the broad field of artificial intelligence, AIMA is great. If your goal is to understand ML in particular, then there are a bunch of other more applicable books (Deep Learning, Learning From Data, probably a bunch of more recent ones that I don't know because I haven't kept up super recently)
Disagree. We pay for convenience all the time. I just did in buying a Synology NAS, even though I could have pieced together my own solution (and have in the past). Having a working, maintained, stable, full-featured email server that interops with the world isn't something you can do quickly on your own, even if you have the skills.
Of course, I agree with that. But this is, at least in my opinion, a very niche product. People who even understand what the product is would probably not be willing to pay a subscription for something they could setup on their own. Of course I could be completely wrong :)
Email is hard to do right. First, I'm likely going to spend 2-3x the price on a server, so instantly we have 5 years of subscription covered by that price.
Then I have to buy an IP in a space that has a good reputation. Then I need to set up offsite backups, set up TLS and DKIM, plus a lot of stuff I'm sure I'm missing. Then I have to stay on top of patches and general maintenance. Plus I have to buy a domain name. Suddenly we're looking at, let's say, a 10-year lifespan before you need to upgrade. You are probably going to be basically even on costs, but the home-built setup has a hundred hours sunk into it too.
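To give a feel for the "stuff I'm sure I'm missing": deliverability alone typically needs a handful of DNS records. A rough sketch, with `example.com`, the selector `mail`, and the key value as placeholders:

```dns
; Typical DNS records for a self-hosted mail domain (illustrative only)
example.com.                  IN MX  10 mail.example.com.
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

Each of these (SPF, DKIM, DMARC, plus a matching reverse-DNS PTR record) has to be kept consistent with the server's actual configuration, which is part of where the ongoing hours go.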
It's true that email is hard to do right, and there are lots of fiddly bits like DKIM. And you need to have control of your DNS. And, and and. I agree with parent's hundred-hour estimate.
It's not true that you need to spend $200-300 on a server; an email server for home use runs fine on a low-powered fanless Intel NUC, for example.
Hardly anyone does offsite backups of their home server, unless they do it by relying on an online backup service. What's the threat-model? Is it that a fire might destroy their server, as well as their in-home backups? OK, backup to a flash key, and give it to a friend to look after.
I care about owning my own data very much. This makes me conservative when it comes to these solutions, despite otherwise being an early adopter of everything tech.
For that reason I always recommend Synology NAS machines. They have been around forever, they work for years on autopilot and feel very similar to a microwave in terms of operational overhead. One-time purchase. No subscriptions. But most importantly, the ecosystem is stable and mature. And they are easy to understand and reason about and come with a slick UI with mobile apps. My favorite feature is having my massive photo collection always available on my phone, served from my own basement (with encrypted AWS Glacier backups).
[EDIT] This is Brandon Phillips of CoreOS fame sharing this! Maybe I should take a closer look then.
In a very boring and traditional way: you buy a domain name, configure dynamic DNS, and then use port forwarding in your home firewall. No 3rd party proxies.
To be clear, the storage app is not bound to the port, rather it’s a VPN server.
It’s how most companies operate (RBAC behind a private key challenge).
Exposing anything to the internet carries intrinsic risk, but exposing a VPN door is among the least risky of the available options, if internet accessibility is an essential feature. The only realistic compromise vectors are private key disclosure, bad VPN configs, or operating an outdated version of WireGuard with a known vulnerability.
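For context on how small that exposed surface is: a minimal WireGuard server config is only a few lines. A hedged sketch, with keys, addresses, and the port as placeholders:

```ini
# /etc/wireguard/wg0.conf -- illustrative minimal server config
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# The phone (or other client) that is allowed in
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

Anything that doesn't present a key matching a listed `[Peer]` gets no response at all, which is why the realistic failure modes reduce to key disclosure, misconfiguration, or an unpatched vulnerability.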
What I do is use L2TP/IPsec VPN to phone into my home network and then login and use Synology NAS "locally". There's no inherent need to open your NAS to the internet if you don't want to.
Interestingly, this attitude used to be the default even here on Hacker News ~5 years ago. I am so glad to see it's changing. Why do I find this interesting? Because this audience always knew what's going on, even without layman articles like this, but didn't care for some reason. This shows that just knowing isn't enough sometimes. Public sentiment matters.
I think HN rules are still behind the times. One of the rules discourages multiple accounts, in an attempt to create a community. OTOH, a longer paper trail makes you easier to identify. This is why I create a new account every few months. Since I cannot delete old posts to "cover my tracks", I have no choice if I want to use the site. Maybe I just shouldn't use HN. I dunno. I learn a lot from discussions, but I don't always tell the truth because I don't want to be traced.
I've got some sympathy with this viewpoint, and have used HN myself under a pseudonym I take pains not to associate with my real-world identity.
But at the same time, the practice of regularly and routinely recycling user identities is ... well, it really does prevent the formation of a community.
The most toxic community I'd ever encountered was a supposedly "kinder and gentler Reddit", the late and unlamented Imzy. A core feature was that individuals could spin up a new pseudonym on each individual thread.
The result was both absolutely disorienting and gave rise to vicious bandwagon and brigade attacks.
Whatever problem Imzy was trying to solve, that was the wrong solution.
(I'm aware that chans often follow a similar tactic, and that ... they tend not to engender highly constructive behaviours.)
On the other hand, Twitter may be one of the greatest evils with regard to social media, and is well populated with non-anonymous accounts.
And of course, HN stands at odds with this theory as well. No one "knows" me on HN. I don't have a reputation or a real identity, and I'm cordial enough (I hope). HN enforces conduct, and this enforcement is not defeated by anonymity.
Another take is that those running HN know what kind of forum they want (one that promotes community) and understand and accept that trade-off. 'dang has written about this at length, so I think that's very likely the case.
That's a trade-off you're not willing to make, so you adjust your behavior accordingly. HN can't be all things to all people. And that's okay.
You clearly find some value in HN as it is because you continue to use it. Something to consider: changes you might like to see may very well change the community as a whole to make it less a place you want to be. Hard to say, without running the experiment, but one of the hazards is that running the experiment could irreparably damage/change HN. And rebooting it would be likely nigh impossible. (If it were easy, we'd all create the fora we wanted.)
And public sentiment is not the same everywhere - for instance, the US tends to be more suspicious of government than of (large, public) corporations whereas elsewhere it's the complete opposite.