
Having files double as a launcher for some program was a mistake, the result of which is that we now have to sit through those awful trainings about not clicking untrusted files.

Just open the program first.

People should fear untrusted programs, but should be at ease viewing data through programs that they trust.


I can answer the Rust question. In the ancient era, Rust source code was split into two types of files: .rc files and .rs files. The latter were "Rust source" files, and the former were "Rust crate" files. The crate files were essentially header files; they defined the public interface of the crate by explicitly listing items to be exported from the crate. Eventually explicit export lists were removed (and replaced with visibility modifiers on items themselves), and so there was no reason for .rc files to stick around, and nobody really ever objected to .rs as the now-universal extension for Rust files, since it's still a valid abbreviation.

Even more simply: let's not get dragged into debating the technical merits of something which is philosophically wrong. That particular person was trying really hard to reframe the whole argument into terms that would work in Google's favor -- if we just pointed out what was technically wrong with the proposal, why, they'd just fix those things and then we'd be good to go. But, it was philosophically wrong, because we've all understood the web to be fundamentally open and anonymous and not controlled by any one entity [1] for decades and most of us want it to stay that way. Even if WEI was technically sound, it would still be an enormous erosion of the principles of an open, anonymous, decentralized web. Any attempt to argue against WEI on its technical merits alone was just allowing Google to drag the whole fight into favorable territory.

> And why would I go into discussion with Google? They don't own the web and never will.

Oh, I think this is a big mistake. Google very nearly does own the web. Gmail handles, by the latest estimates, between 45% and 60% of email traffic, depending on who you talk to. Chrome or Chromium gets somewhere around 65% of the global browser market. Google gets around 90% of search traffic. Google ads. Google domains. YouTube. Google Cloud, which WPEngine for example runs on. Google Docs. Chromebooks. Android.

I really need more people to pause and reflect for a moment on just how much of the internet is currently owned by Google.

[1]: Well, ignoring ICANN, or Microsoft, or Google, or Cisco, or...


> Every year I secretly hope some major consortium with traction release "IPv4 2.0" with 8 bytes and get done with it.

How exactly would this work?

IPv4 data structures have four bytes/octets (4B) for addresses. So how do you fit 8B of addresses in 4B structures? You don't. So you have to update every network element—host (desktop, laptop, mobile, embedded), router, switch, firewall—to have a new data structure (and maybe new function/system calls, as the old ones assume the old structures). So all devices have to have updated network stacks, including long-lived ones that sometimes are not touched for a decade+.
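
To make the size problem concrete, here is a minimal Python sketch (the packed "4s4s" pair just stands in for the fixed source/destination fields of a real IPv4 header):

    import socket
    import struct

    # An IPv4 address is exactly four octets; every structure that
    # carries one has room for 4 bytes and no more.
    a = socket.inet_aton("192.0.2.1")
    assert len(a) == 4

    # Fields are packed to that width ("4s" = exactly 4 bytes); there is
    # nowhere to put an 8-byte address without defining a new layout.
    src_dst = struct.pack("!4s4s", a, socket.inet_aton("198.51.100.7"))
    assert len(src_dst) == 8

    # IPv6 addresses are sixteen octets, which is why every struct,
    # parser, and API had to change anyway.
    a6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
    assert len(a6) == 16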

And not just pure networking code: anything that touches DNS, for example, since A records are also 4B-only; you need a new record type, and new DNS server and resolver code deployed everywhere.

But of course 4B-only network elements/devices cannot talk to 8B-only devices/services, so you have to create translation mechanisms, because some devices may not yet have the updated 8B-capable network stack. And sometimes an 8B network is an island in a sea of 4B, so you have to have tunneling. Of course some devices may have both 4B and 8B, and want to talk to something that also has both, so now you have to have code for source/destination selection.

Those are the exact same challenges as with the IPv6 roll-out.

And of course "IPv4 1.0" continues to work, so what motivation is there for going to your "2.0"—just like what motivation is/was there for going to IPv6 when IPv4 is/was 'working'?

See "TCP and UDP with Bigger Addresses (TUBA), A Simple Proposal for Internet Addressing and Routing" for part of the discussion from back in the day (1993):

* https://datatracker.ietf.org/doc/html/rfc1347


So signals make sense once you understand where they came from.

The writers of Unix needed a kernel->user downcall mechanism. Back then, user mode code was thought of as being like what we now call unikernels, with the kernel thought of as a bit more like a hypervisor. The individual user sessions were, early on in Unix's history, only a couple more primitives on top of what we'd later think of as DOS-style environments, simply multiplexed on the hardware. From that perspective, mirroring the microcode->kernel downcall mechanism (interrupts) at the kernel->user boundary makes plenty of sense.

Once you see them for what they are (interrupts for user mode instead of kernel mode), all of their goofiness makes sense.

* SIGSEGV, SIGILL, etc. map to processor exceptions.

* SIGIO, SIGWINCH, etc. map to I/O device interrupts, including SIGALRM as a timer interrupt.

* SIGSTOP, SIGKILL, etc. map to platform interrupts like NMIs.

* SIGUSR, etc. map to IPIs.

It's a one-shot downcall mechanism that can happen at any time, so everything about how you reason about concurrency during an interrupt applies here as well. You can't take a regular mutex because it might already be held by the context you interrupted. Less of an issue now, but any non-reentrant code shared with the regular user context is off the table. So just as large swaths of the kernel are off limits in interrupt context, large swaths of user mode are off limits except for very carefully written, very context-dependent code.
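
A sketch of that discipline in Python (whose runtime already defers handlers to safe points, so this only illustrates the pattern): the classic self-pipe trick, where the handler does one async-signal-safe write and the real work happens back in normal context:

    import os
    import select
    import signal

    # Self-pipe trick: the handler does the bare minimum -- one write --
    # and the main loop does the real work in ordinary user context.
    r, w = os.pipe()
    os.set_blocking(w, False)

    def on_signal(signum, frame):
        # No locks, no allocation-heavy work: the interrupted context
        # might already hold any of those resources.
        os.write(w, b"\x00")

    signal.signal(signal.SIGUSR1, on_signal)
    os.kill(os.getpid(), signal.SIGUSR1)      # simulate the "interrupt"

    ready, _, _ = select.select([r], [], [], 1.0)
    if ready:
        os.read(r, 1)
        print("signal observed; handled outside the handler")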

I even ported an RTOS to run in *nix user mode for CI/development purposes, where the interrupt controller abstraction was simply implemented as signals. The code didn't look any different than you'd expect from running under a hypervisor with a paravirtualized interrupt controller.


As someone who works in the field and works with LLMs on the daily - I feel like there are two camps at play. The field is bimodally distributed:

- AI as understandable tools that power concrete products. There's already tons of this on the market - autocorrect, car crash detection, heart arrhythmia identification, driving a car, searching inside photos, etc. This crowd tends to be much quieter and occupy little of the public imagination.

- AI as religion. These are the Singularity folks, the Roko's Basilisk folks. This camp regards the current/imminent practical applications of AI as almost a distraction from the true goal: the birth of a Machine-God. Opinions are mixed about whether or not the Machine-God is Good or Bad, but they share the belief that the birth of Machine-God is imminent.

I'm being a bit uncharitable here since as someone who firmly belongs in the first camp I have so little patience for people in the second camp. Especially because half of the second camp was hawking monkey JPEGs 18 months ago.


Well, high-speed serial comms existed before USB, so the premise of the question is a bit wrong. RS-422 officially supports 10 Mbps over short distances, for example, and various serial WAN protocols supported >100 Mbps over copper before USB launched.

I would argue that what USB (1.1-2.0) does differently from previous serial peripheral ports is mostly software and standardization, and really relate to making it cheap and simple for "normal people" to use:

1. USB 1.1 supports only two bit rates (1.5 and 12 Mbps), which is autoconfigured before device enumeration. 12 = 8 * 1.5, so the clock divider is cheap and easy.

2. USB limits cables to fairly short lengths (<=5m) compared to earlier serial ports (RS-422 supports 10 Mbps at 15m).

3. USB (pre-OTG extensions) rigorously enforced the idea of a "host" (upstream) and "device" (downstream), at both the protocol and physical connector level (which greatly simplifies things--for example, you can't create a loop, and don't need STP to detect it). A child can easily see that a B (device) socket doesn't fit an A (host) plug.

4. USB device enumeration and configuration are extensively software based, and USB defines a number of standard device classes so that many common types of devices (e.g., keyboards, mice) don't need specific drivers.

5. One thing that USB does that's less common among serial peripheral interfaces is to use a single differential pair for data going in both directions (it's time-shared: so the host polls the device then the device responds). The pin-count is less than a "regular" RS-422 port, but you still get the advantages of differential signaling.

6. USB carries power with only one more pin than needed to create a minimal bidirectional serial data connection, so "lite" devices don't need a separate power connector (ignoring very slow protocols like "1-wire").

7. USB does some funny things with packet framing (like NRZI encoding and bit-stuffing) and some things to help reduce device cost (like the JKJKJK packet preamble to sync the device baud-rate generator and using SE1/SE0 states for device disconnect and bus reset signaling), but none of that is really fundamental to making a 10Mbps-class serial interface.
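
Both of those framing tricks are simple enough to sketch; assuming bits as 0/1 integers, something like:

    def usb_encode(bits):
        # Bit stuffing: after six consecutive 1s, force a 0 so the line
        # keeps transitioning and receivers can stay in sync.
        stuffed, run = [], 0
        for b in bits:
            stuffed.append(b)
            run = run + 1 if b else 0
            if run == 6:
                stuffed.append(0)
                run = 0
        # NRZI: a 0 toggles the line level, a 1 holds it.
        level, line = 1, []
        for b in stuffed:
            if b == 0:
                level ^= 1
            line.append(level)
        return line

    print(usb_encode([1] * 8))   # the stuffed 0 shows up as a mid-run toggle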


I think people really need to think about USB Type A in the context of the connectors it was 'competing' with: DE-15 (VGA), DE-9 (RS-232 serial), DB-25 (parallel), mini-DIN (PS/2 keyboard/mouse). These were, by modern standards, complete garbage connectors. Huge. Didn't stay plugged in securely unless the connector had screws. Only went one way.

USB 1 stayed plugged in, required no screws, had higher bandwidth than any data port you'd find on a typical PC, the wires were much thinner, and it could even power low power devices! Yes, a design that doesn't take three tries (my average) to get right would be nice, but it exceeded every existing port by miles. Audio jacks were the only ports that didn't have a direction back then.


You know how food tastes better when you’re hungry? That’s the element of nostalgia that is still unaccounted for in the emulation/preservation scene.

Most of us who grew up playing games were hungry for it - our parents allowed us a fairly limited supply of games, on their budget and schedule, and they probably put limits on when and how long we could play them for, right? Maybe we had to travel to a friend’s house because they were the only one in the neighborhood with a particular game or console. Furthermore, entertainment options in general were limited, and the games offered a genuine novelty that you couldn’t get anywhere else - even the crappy ones. Thus we savored every opportunity to play, and the satisfaction of that hunger is what our nostalgic memories are built on.

Contrast to today, where you can download NES.zip and possess literally every game on that platform, almost instantly. As an adult, you can play as much as you like. But you probably don’t even want to play that much because you’ve also got a universe of streaming TV and movies and media and a back catalog of AAA-games a mile long and VR and gadgets and social media and a million other entertainment options, distractions and responsibilities. We’re not hungry at all - we’re obese from media and stuffed to the gills with it.


People only think large blobs don't belong in VCS because they don't work well with Git.

As soon as a VCS comes along that actually handles that properly people will say "of course, it was obvious that it should have been like this all along!".

Git LFS is a proof of concept, not a real solution.

Unfortunately none of the new Git alternatives I've seen (Jujutsu, Pijul, etc.) are tackling the real pain points of Git:

* Submodule support is incomplete, buggy and unintuitive

* No way to store large files that actually integrates properly with Git.

* Poor support for very large monorepos where you only want to clone part of it.

In a way, Git is bad at everything that centralised VCS systems are good at, which isn't surprising given that it's decentralised. The problem is that most people actually use it as a centralised VCS and want those features.


>Cars are the one thing I don't want to buy. It's not an asset. It depreciates.

If you look at a car as an investment, "you're holding it wrong" is the best I can come up with as to how I feel about it. To me, a car is just a really specialized tool. I don't expect my wrenches and socket sets to appreciate in value the longer I hold them. Cars are made by the thousands every day/week/month/year. We don't expect our mobile devices/laptops/desktops to appreciate either. This whole depreciation argument confuses me. Maybe some people just have a hard time seeing something with that kind of price tag as simply a tool?

There are very few cars that are investment worthy, and if that's the kind of car you're after, then so be it. But to the 99.99999% of people looking for a car, it is simply something to serve a purpose.


Three years ago, I migrated from Gmail to FastMail because I was afraid of losing access to my digital life on Google's whim.

Two years ago, I found out that my favorite YouTube creators were all on Nebula.

One year ago, I switched my phone to LineageOS to get security updates a little longer.

A month ago, I installed OpenStreetMap because Google Maps got really bad at showing points of interest.

And today, Kagi removed the only obstacle that kept me on Google Search. I'm looking forward to building my filter list.

After accidentally de-googlifying myself, I might ditch Windows next. It feels really nice using products that respect me, as opposed to services that are actively hostile because of advertisers.


Yeah, I totally agree. Computers for a long time were not a communications device the way they are today. You could buy a modem and use it to communicate, but that was a distinct activity, and once you exited the terminal program you were once again by yourself.

Now, the communications function is this always on thing that never goes away. It's interwoven with every other activity you do. That makes the whole experience much different.


The biggest advantage I found when learning on DOS and Windows in the 1990s was the overall lack of distractions. With no smartphone, YouTube, or Reddit/HackerNews, I could fully concentrate.

That may be true, but the point was about empowering the end user. Making programming popular through the app store (arguable) is a separate (also important) topic.

Empowering the end user to not look at the computer as a black box whose inner workings they have no idea about is quite freeing and mind-expanding.

In the long term, I do believe there is a real risk for programming to become a niche where you end up asking for permission from the h/w vendor before you are allowed to write code on the device. We're slowly heading in that direction with app stores being bundled with the OS, and we may end up in a situation where you can only install s/w through the app store. And only "authorized" persons can download IDEs and dev tools. A large population that has no concept of programming is likely to not oppose this because the vendor will throw the security/privacy boogeyman at them, about "unauthorized" developers writing software that can harm them.


> And yeah you can forgive all this by quibbling that git was written in a time when internet access was not ubiquitous

I don't think that's right. To some extent, Git's remote concepts were built on even more ubiquitous internet access than you are assuming. Git was built where some upstreams were patches on email mailing lists. Git was built in environments where every contributor could relatively easily stand up a small server of their own (even as just a temporary server on a personal device with specified time windows), and you might have branch remotes tied to different colleagues' servers in a distributed fashion, the D in DVCS.

At the time those weren't advanced features for advanced users; those were simple features for flexible source control. There's a sort of simplicity to pull requests in email flows. There's a sort of simplicity in "hey, can you check out my branch and make notes on it, I'll serve it on my lab machine for a couple of hours so you can grab it, here's the URL." In some of those cases you don't even care to remember that remote URL after you've grabbed the branch, because it will be a different IP address and port the next time they bring up that lab machine. (Git was built in a world where there was no "origin" and multiple repos were valid representations of progress, some of them transient and as-needed; "origin" was a name and concept that came later.)

Some of that only exists in a world that assumes internet connectivity is ubiquitous, not just access, but service hosting and upload capabilities. The internet has some strange centralizing forces making access easier but anything other than raw consumption harder.

There are certainly a lot of good reasons for some of that centralization. Whether or not it is "simpler", there's a convenience in everyone sharing big centralized hosts. There's a lot of convenience in "there is mostly only one remote that everyone shares and it has a high-uptime SLA and a ton of extra collaboration features in one place". There were certainly a lot of centralized version control systems before the DVCS was invented, and beyond convenience there was also a lot of familiarity that such centralizing operations benefited from.

It's interesting to me that in your last paragraph you think the solution is to make git a more "proper" distributed system, when one of the features you find too complex and don't like exists precisely because git was defined and built as a distributed system, and just isn't as convenient when working with centralized providers. Git repos support multiple remotes because git was built to be distributed; git repos require you to fetch remotes explicitly because git was built to be fault-tolerant in a distributed system where remotes may have very different SLAs from each other; losing access to one remote host shouldn't stop you from fetching updates from a different one. The DX there was built for a distributed system. It is mostly where everything revolves around some super special "origin" remote that the DX feels overly complicated and maybe missing better "defaults". It is mostly on today's internet, where running a simple CLI command to spin up a quick code server on a random port on a random machine with an accessible IP address is increasingly hard, that it becomes harder to imagine why people ever needed remotes beyond that special-sauce "origin" idea.


My mom sent me a photo of me when I was about 8 years old, playing on the home computer. It was very expensive to have a computer in Brazil at that time, and my parents used it for their work. I used it for games. I grew up using computers. Just getting the games to run was something that needed knowledge back then. That led me, in my teenage years, to try out programming, because I wanted to make some changes to the open-source version of an MMORPG my brother and I played. That led me to choosing computer science. That led me to being a FAANG engineer. I had a leg up against every single one of my peers during all my teenage and university years. When my peers were learning to use a computer, I was already programming. When they were learning to program, I was already good at it. You say that computers for young kids have no value? Useless? That computer usage was the single most valuable thing that has happened to me in all of my life!

Every now and again I see someone on this site saying things along the lines of "it's a shame that programmers in general are so unwilling to pay for quality tools", and I feel myself tempted to agree.

Then something like this comes along and it becomes clear why it's so important that everything we rely on is, at the least, free from "I changed the deal" events.


NTFS was not an innovative file system. It mostly reused innovations introduced earlier by HPFS.

The High-Performance File System, a.k.a. HPFS, was included in OS/2 1.2, which launched in November 1989.

HPFS was a very innovative file system (its chief architect was Gordon Letwin). It introduced extended file attributes. It had directories implemented with B+ trees, like IBM VSAM (introduced some time between 1973 and 1978). It had long file names and access-control lists, like the Multics FS (1965). It used cylinder groups, like the Berkeley FFS (1983). It used file extents, like the SGI EFS (1988).

Among the UNIX file systems, the SGI XFS (1993, almost simultaneous with NTFS) was the first to adopt all the innovations introduced by HPFS.


I actually wrote a long article on this [1]—and had a chance to interview some of the team that built the original version of VB that was sold to Microsoft. (Alan Cooper and Michael Geary; Michael actually frequents HN pretty regularly!)

My opinion is that it was a confluence of a few factors:

- Microsoft was very worried about the threat of Java/Sun, and rotated hard into .NET and the common language runtime as a response.

- A vocal minority of VB users wanted more advanced functionality and a more powerful, expressive language (as is often the case). Coupled with the shift to .NET, Microsoft listened to them: VB got a full rewrite into an object-oriented language, and the IDE moved further away from the VB6 visual-building paradigm. That left the silent majority high and dry.

- The web emerged. Working with the Win32 API was suddenly less relevant, and younger devs adopted PHP en masse rather than adopting VB. (And existing VB6 devs upset about the change also migrated over when they could build for the web instead of Windows.) Unforced error on Microsoft's part, since IE had 96% browser market share in 2001.

[1] https://retool.com/visual-basic/


> C programmers could read java code

This isn't some abstract "readability", though; this is "how close are your languages in the evolutionary tree". ALGOL is the Proto-Indo-European of quite a lot of languages, but not all.


>the average user doesn't give a shit about backwards compatibility, they care about being able to create slideshows and send emails.

They may not care about backwards compatibility as an abstract concept, but they would like their computer to continue functioning the same way today as it did yesterday.

Although MS is getting somewhat bad at that from a general OS perspective with their updates.


I've never tried removing nano, but I hate it with a passion for one simple reason.

If none of the EDITOR/VISUAL/etc. variables are set when you invoke 'visudo', a f*king nano appears.

Like, god dammit, I assume the 'vi' in that name comes from vi, as even the manpage has 'vi' among its 'see also' entries. This is akin to launching Firefox and having some random application pop up. It's the ultimate troll, and I salute you graybeards for solving the problem at the core!


That article shows the machines, but gives no sense of how they are used together.

Here's a simple billing cycle. Inputs are:

- Purchases, with a description, customer number and amount. These were created manually, by a keypunch operator. Cards would also have date and card type, copied from card to card while punching.

- Payments, with a customer number and amount. Again, with date and card type, copied from card to card.

- Customer addresses, usually a 3-card set, with name and address, customer number, and card type. These are semi-permanent, used month after month, updated by hand.

- A previous balance card for each customer, with customer number, amount, date, and card type.

So, at the end of a billing period, the first step is to sort all the purchase cards by ascending customer number. That's what the 082 sorter is for. They should have been accumulated in date order, although if you have to sort by date, you can. Sorting is labor-intensive, because all the sorter does is a 1-digit radix sort. So if you have a 5-digit customer number, you have to put all the cards through five times. You don't want to have an unnecessarily long customer number. There's a lot of manual card handling here, but it's mindless.
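
Mechanically that's a least-significant-digit radix sort; a rough Python sketch, with customer numbers as digit strings and one sorter_pass per column:

    def sorter_pass(cards, column):
        # One trip through the 082: cards fan out into pockets 0-9 by
        # the digit in one column, then get restacked in pocket order.
        pockets = [[] for _ in range(10)]
        for card in cards:
            pockets[int(card[column])].append(card)
        return [card for pocket in pockets for card in pocket]

    def sort_deck(cards, digits=5):
        # Least-significant digit first; a 5-digit customer number
        # means every card is handled five times.
        for column in range(digits - 1, -1, -1):
            cards = sorter_pass(cards, column)
        return cards

    print(sort_deck(["00342", "00017", "10003", "00017"]))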

Payment cards get the same sorting treatment, by customer number.

Previous balance cards and address cards should already be in order by customer number, and that will be checked.

Now the 085 collator is used. First, the address cards and previous balance cards go into the two hoppers. The collator can read selectively from either hopper. The only operation the collator can do is compare numbers: less, equal, or greater. But that's enough. The collator combines the address deck and the previous balance deck, so that the deck, for each account number, is now

    Address 1, address 2, address 3, previous balance.
Any previous balance card without an address or an address without a previous balance card is caught at this time. Those cards may go into a different output stacker, or the collator may be stopped for a manual check, depending on how the plugboard was wired.

Next comes another collator pass: the deck from the previous step vs. the payment cards. Now the deck is

    Address 1, address 2, address 3, previous balance, payments.
Now the big collator pass, where the purchase cards are merged in.

    Address 1, address 2, address 3, previous balance, payments, purchases.
The collator has checked that everything is in the right sequence, with all cards in a block having the same customer number and successive blocks having ascending customer numbers. That's the final deck, and it now goes to the 403 tabulator.
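
Functionally, each collator pass is a stable two-way merge on the customer number, with the primary hopper winning ties so address cards stay in front. A rough sketch of just the merge (the real machine also sequence-checked and could offstack unmatched cards), with cards as (customer number, payload) tuples:

    def collate(primary, secondary, key=lambda card: card[0]):
        # Two-hopper merge like the 085: both decks ascending by key;
        # on equal keys the primary hopper feeds first, so address and
        # balance cards lead the payments/purchases merged behind them.
        out, i, j = [], 0, 0
        while i < len(primary) and j < len(secondary):
            if key(secondary[j]) < key(primary[i]):
                out.append(secondary[j]); j += 1
            else:
                out.append(primary[i]); i += 1
        return out + primary[i:] + secondary[j:]

    deck = collate(
        [("00017", "address 1"), ("00017", "previous balance")],
        [("00017", "payment"), ("00342", "payment")],
    )
    # -> address 1, previous balance, payment (00017), payment (00342)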

The tabulator has one hopper and one stacker, a printer, and a big removable plugboard that's the program. The data processing shop will have a rack of such plugboards for different jobs. There will be custom-printed multi-part carbon forms, and a "carriage tape" loop, which tells the printer how much to advance the paper to get to the next section of the form. The 403 has very little memory, about 40 decimal digits, although you can buy a few extra counters from IBM if you have to.

So now it's time to print the bills. The final deck is loaded, 500 cards at a time or so. The 403 grinds into action. The address cards are read, and they print in the address block on the form, which will be visible in a window envelope when mailed. Totals are cleared. When the tabulator detects the change from an address card type to a previous balance card, the carriage is told to advance to the line indicated by a hole in the carriage tape. This moves the paper to the beginning of the transactions area. The balance forward is printed on the balance forward line, and added to the total. The purchases follow, again being added. The payments follow, and those are subtracted. When the reader sees a new address card, that indicates the end of this customer.

So the tabulator advances to the total line on the form, and prints the total amount due. An invoice has been generated, and will later go into an envelope to be sent to the customer.

Remember that 514 Reproducing Punch? That's been cabled to the 403 tabulator via a huge cable and connector, for Summary Punch mode. When the tabulator prints the total amount due, it sends the total and customer number to the 514, which punches a previous balance card with the new balance, to be used next month.
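
The whole tabulator pass is what later tools would call control-break processing: accumulate while the customer number stays the same, then break, print, and summary-punch when it changes. A rough sketch, with cards as (customer, type, amount) tuples and the returned balance cards standing in for the summary punch:

    def tabulate(deck):
        # Control-break logic of the 403: a new customer number on an
        # address card closes out the previous invoice; balances and
        # purchases add, payments subtract.
        new_balance_cards, customer, total = [], None, 0
        for cust, card_type, amount in deck:
            if card_type == "address" and cust != customer:
                if customer is not None:
                    new_balance_cards.append((customer, total))  # summary punch
                customer, total = cust, 0
            elif card_type in ("previous balance", "purchase"):
                total += amount
            elif card_type == "payment":
                total -= amount
        if customer is not None:
            new_balance_cards.append((customer, total))
        return new_balance_cards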

You get the idea. One cute feature is that you can program the 403 to detect a hole punched in the carriage tape which indicates the form is full. Then it can print a per-page total, increment the page number, start a new form, repeat the customer number, and continue printing transactions. Yes, they thought of that problem.


> Conservatism has always been opposed to democracy

Conservatism distilled to its essence can probably be summarized as Chesterton's fence: to not make rash moves changing the status quo without fully understanding both the reason the status quo is the way it is, and the nth-degree aftereffects of the change.

So I grant you, in the beginning of democracy, almost by definition conservatism was against it, when the status quo was a feudal monarchy or whatever; but there's been a few hundred intervening years, and I don't think it's fair to make the argument that "conservatism" is still, and will always be, opposed to "democracy".


IPv6's biggest problem remains not that it's badly designed (at least not nowadays; there were problems, but they were solved ten years ago) but that millions of network engineers never bothered to look deeper into IPv6 than "I don't get it, this feels off".

You can't make a backwards-compatible "IPv4 with more bits" like people dream of. L2 routers and middleboxes would still need to be replaced, software would still need to be rewritten, nothing would be different. IPv6 changed how private networks work because its first attempt at private networks failed.

People are stuck with the IPv4 mindset through a combination of lacking education (who even taught IPv6 when our current sysadmins were in college?) and assuming IPv4 is normal and well thought out. There are free guides, books, and playgrounds out there if you want to learn IPv6, so the education problem is one you can solve yourself. Realizing the flawed nature of IPv4 is harder.

I've come from IPv4 networking, but learning IPv6 later made me realize how silly old networks really are. DHCP is a hack to solve a design failure in IPv4, and SLAAC is a much better solution. Companies have come to rely on awful hacks that originated when they decided to staple features onto the side effects of a generic address distribution protocol. ARP feels more like a placeholder that should've been included a layer lower or higher in the network stack, put in its own little place to solve the theoretical "what if we don't run IP over our switch" problem that stopped being relevant decades ago.
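
SLAAC really is that simple: the router advertises a /64 prefix and the host forms its own interface identifier. A minimal sketch of the classic modified-EUI-64 scheme (modern stacks usually prefer the RFC 4941/7217-style identifiers mentioned below, but the mechanics are alike):

    import ipaddress

    def slaac_eui64(prefix, mac):
        # Classic SLAAC interface ID: split the MAC, wedge ff:fe in the
        # middle, flip the universal/local bit, append to the /64 prefix.
        octets = bytes.fromhex(mac.replace(":", ""))
        iid = bytearray(octets[:3] + b"\xff\xfe" + octets[3:])
        iid[0] ^= 0x02
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(
            int(net.network_address) | int.from_bytes(iid, "big")
        )

    # e.g. 2001:db8:1:2:5054:ff:fe12:3456 -- no DHCP server involved
    print(slaac_eui64("2001:db8:1:2::/64", "52:54:00:12:34:56"))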

As annoying as it may be, we live in an age where the OSI model's seven layers of networking protocols no longer exist in practice. Token Ring is dead, SCTP died in the womb, and the alternatives to Ethernet II framing exist purely in theory. The world now runs on HTTPS on top of UDP or TCP, on top of IPv6/4, on top of some kind of wire that carries Ethernet.

Ethernet now exists to support IP and vice versa in 99% of all use cases. TCP and UDP exist to serve HTTPS, or some legacy protocol that will be rewritten into HTTPS in the next ten years. WiFi and high-speed mobile data networks came in as whole new networking systems and turned out to be "what if Ethernet, but wireless", with some control logic to make the wireless antennae talk. The OSI model and all the expansion and flexibility it provided simply died over a decade ago. Everything on top of the data link exists purely to support Ethernet + IP + HTTPS.

The migration path to IPv6 is now blocked by excuses. People pretended to care about servers not supporting IPv6 as the reason not to use it, but three or four different ways of providing backwards compatibility to all IPv4 clients were thought up and nobody actually asked for any of them. People complained that their data center provider didn't support IPv6, but now that enabling IPv6 is just a single click in a web UI they don't enable it anyway. People cared about the IPv6 privacy risks but never let go of that concept even after RFC 4941 fixed the oversight. Companies like Microsoft and GitHub, too incompetent to set up a network that their dollar-store competitors have supported for years, have become something to point at and go "see? we need those!" as if NAT64/DNS64/464XLAT/SIIT/whatever haven't been providing IPv4 compatibility for years now.

"I don't know enough about it" and "I don't like it" are perfectly good excuses not to enable IPv6 in your home network, but they're not design flaws or protocol problems. If you're willing to accept the packet maiming we have nicknamed "NAT" or even "CG-NAT", you should feel refreshed at the sight of the plain and simple protocols IPv6 provides you with.

The way people talk about IPv6 now reminds me of the way people talked about HTTPS back when Let's Encrypt started gaining popularity, and the way people dealt with systemd reinventing a better Linux management system. Grumpy people, clinging to what they know, delaying unavoidable change until the very last moment. You can be like the Dracut people running ipromiseiwillneverrunssl.com if you want, but it's a losing battle.


> an OS designed after the idea of "the network is your environment, not the computer" where it should make little difference if the resources you are accessing is on your computer or not. [..] For some reason (maybe related to costs and licensing), people never adopted the idea...

I personally think it's a pipe dream, and the way history worked out seems to confirm that.

Truly transparent networking never really worked out anywhere. The better RPC solutions in use, for example, make it very obvious that an RPC is an RPC, instead of looking like a local function call. In file transfer, it has been common to e.g. replace NFS with something HTTP based for quite some time. "Cloud" file systems seem to operate at a much higher layer, preferring external syncing and special high level APIs with only some actual OS support sprinkled in.

For Operating Systems, a lot of Mach breaks down when you want to do things fast and securely.

It's a testament to APIs being better when they're shaped by locality. Roughly and informally speaking, the closer you are to the actual CPU, the more "lightweight" and "direct" your API can and should be. The further you move away, the more resilient you need to become, and the more control and insight you need to give the caller about the actual transport.

To pick two very real extremes as an example: When writing to a register on a local bus, you often just assume it will work. If it doesn't, a panic is sensible (sometimes even worse, like a lockup leading to a watchdog timeout). When trying to invoke a function on another server, obviously that's not acceptable. Then you want to be able to timeout, retry, inspect transport failures, fall back to other servers, configure endpoints, be picky and explicit about payload (and result), and so on and so forth.

The immense cruft necessary for the latter case becomes unnecessary and expensive the more local you are. It affects security, and can even pose chicken and egg problems.
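
A sketch of that cruft (the endpoints are hypothetical callables standing in for remote stubs); none of this ceremony would make sense for a local register write:

    import time

    def call_remote(endpoints, attempts=3, timeout=2.0):
        # The machinery a remote invocation needs and a local call
        # doesn't: per-attempt timeouts, retries with backoff, fallback
        # endpoints, and transport failures surfaced to the caller.
        last_error = None
        for endpoint in endpoints:              # hypothetical stubs
            for attempt in range(attempts):
                try:
                    return endpoint(timeout=timeout)
                except (TimeoutError, ConnectionError) as exc:
                    last_error = exc
                    time.sleep(0.1 * 2 ** attempt)   # back off, retry
        raise ConnectionError("all endpoints failed") from last_error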



I'm posting this reply from my mainstream smartphone: a pocket supercomputer with a 4K 60Hz HDR video camera. I'm chock full of designer vaccines and can watch a catalog of almost all the movies ever made, on demand, tonight. I listened to a podcast in my EV on the way to work. And SpaceX probably launched 60 satellites this week on the 14th flight of a rocket that flew home afterwards.

Things have been moving along.

edit: comments are quibbling about the fraction of all movies ever made that I can see on a paid streaming service. This is like complaining that the meals and elbow room are not wonderful on a $400 flight from LA to London. It's a goddam miracle and you're still grumbling. Please tell me now how your $12/mo Spotify account doesn't have the June 1972 Grateful Dead New Jersey show your mom was at, and is thus near worthless. I spent $12 in 1980s money on a single album in my youth.

Let me amend my grossly hyperbolic statement and say I could stream on demand more movies than I could ever watch even if I did nothing else the rest of my life, including many but not all of the good ones. Now the statement is strictly true, but did this make my contribution better?


It's very much a "glass half full" perspective, which gets tiresome from somebody like Bjarne, who is very resistant to the idea that he's made mistakes. It excuses lots of bad mistakes because hey, if people really felt that way they wouldn't use C++, right? But of course increasingly they aren't using C++ because it sucks, which is why they were complaining.

Unlike C++, Git has put considerable work into responding to complaints, understanding that while yes, they wouldn't complain if they didn't use it, they also wouldn't complain if there weren't so many things to complain about. Lots of things we use every day don't get many complaints because there's nothing much wrong with them. Even the brutal Git command line is less hostile today than it was originally.

