cesarb's comments | Hacker News

> you just have 9 bits per byte rather than 8 bits per byte physically on the module. More chips but not different chips.

For those who aren't well versed in the construction of memory modules: take a look at your DDR4 memory module, and you'll see 8 identical chips per side if it's a non-ECC module, and 9 identical chips per side if it's an ECC module. That's because each 64-bit data word is spread across the chips: the address and command buses are connected in parallel to all of them, while each chip drives its own slice of the data lines on the memory bus. On non-ECC memory modules, the data lines which would carry the extra ECC bits are simply not connected, while on ECC memory modules, they're connected to the 9th chip.

(For DDR5, things are a bit different, since each memory module is split into two independent halves, with each half having 4 or 5 chips per side, but the principle is the same.)
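
As a rough illustration of why a single extra chip is enough (a sketch only; real memory controllers use a fixed SECDED code, not a loop like this): a Hamming code over a 64-bit word, plus one overall parity bit for double-error detection, needs exactly 8 check bits, which is exactly the width of one extra x8 chip:

    # Illustrative only: minimum check bits for SECDED over a given data width.
    # Hamming condition: 2**r >= data_bits + r + 1; add 1 overall parity bit.
    def secded_check_bits(data_bits: int) -> int:
        r = 0
        while 2 ** r < data_bits + r + 1:
            r += 1
        return r + 1

    print(secded_check_bits(64))  # 8 -> 64 data bits + 8 ECC bits = 72-bit ECC DIMM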


> There's a weird fetishization of long uptimes. I suspect some of this dates from the bad old days when Windows would outright crash after 50 days of uptime.

My recollection is that, usually, it crashed more often than that. The 50 days thing was IIRC only the time for it to be guaranteed to crash (due to some counter overflowing).

> In the modern era, a lightly (or at least stably) loaded system lasting for hundreds or even thousands of days without crashing or needing a reboot should be a baseline unremarkable expectation -- but that implies that you don't need security updates, which means the system needs to not be exposed to the internet.

Or that the part of the system which needs the security updates not be exposed to the Internet. Other than the TCP/IP stack, most of the kernel is not directly accessible from outside the system.

> On the other hand, every time you do a software update you put the system in a weird spot that is potentially subtly different from where it would be on a fresh reboot, unless you restart all of userspace (at which point you might as well just reboot).

You don't need a software update for that. Normal use of the system is enough to make it gradually diverge from its "clean" after-boot state. For instance, if you empty /tmp on boot, any temporary file is already a subtle difference from how it would be on a fresh reboot.

Personally, I consider having to reboot due to a security fix, or even a stability fix, to be a failure. It means that, while the system didn't fail (crash or be compromised), it was vulnerable to failure (crashing or being compromised). We should aim to do better than that.


> My recollection is that, usually, it crashed more often than that. The 50 days thing was IIRC only the time for it to be guaranteed to crash (due to some counter overflowing).

I had forgotten about this issue (I never got a Windows 9x system to survive more than a few days without crashing), and apparently it was a 32-bit millisecond counter that would overflow after 49.7 days:

https://www.cnet.com/culture/windows-may-crash-after-49-7-da...
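
The arithmetic checks out; 2^32 milliseconds is just under 50 days:

    # 32-bit millisecond counter wrap, expressed in days
    print(2 ** 32 / (1000 * 60 * 60 * 24))  # ~49.71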


> It really is amazing how much success Linux has achieved given its relatively haphazard nature.

That haphazard nature is probably part of the reason for its success, since it allowed many alternative ways of doing things to be tried out in parallel.


Many new devices use a 2.5mm audio jack instead of the 3.5mm audio jack.


Yes, but that doesn’t obsolete the 3.5mm jack or the 1/4” jack. It’s just a different form factor of the same thing.


> but I believe that success Linux has had is because of copyleft

No, the success Linux has had is because it ran on the machines people had at home, and was very easy to try out.

An instructive example would be my own path into Linux: I started with DJGPP, but got annoyed because it couldn't multi-task (if you started a compilation within an IDE like Emacs, you had to wait until it finished before you could interact with the IDE again). So I wanted a real Unix, or something close enough to it.

The best option I found was Slackware. Back then, it could install directly into the MS-DOS partition (within the C:\LINUX directory, through the magic of the UMSDOS filesystem), and boot directly from MS-DOS (through the LOADLIN bootloader). That is: like DJGPP, it could be treated like a normal MS-DOS program (with the only caveat being that you had to reboot to get back to MS-DOS). No need to dedicate a partition to it. No need to take over the MBR or bootloader. It even worked when the disk used Ontrack Disk Manager (for those too young to have heard of it, older BIOSes didn't understand large disks, so newer HDDs came bundled with software like that to work around the BIOS limitations; Linux transparently understood the special partition scheme used by Ontrack).

It worked with all the hardware I had, and worked better than MS-DOS; after a while, I noticed I was spending all my time booted into Linux, and only then did I dedicate a whole partition to it (and later, the whole disk). Of course, since by then I had already gotten used to Linux, I stayed in the Linux world.

What I read later (somewhere in a couple of HN comments) was that, beyond not having all these cool tricks (UMSDOS, LOADLIN, support for Ontrack partitions), FreeBSD was also picky with its hardware choices. I'm not sure that the hardware I had would have been fully supported, and even if it were, I'd have had to dedicate a whole disk (or, at least, a whole partition) to it, and it would also have taken over the boot process (in a way which probably would have been incompatible with Ontrack).


> FreeBSD was also picky with its hardware choices. I'm not sure that the hardware I had would have been fully supported

Copy / paste of my comment from last year about FreeBSD:

I installed Linux in fall 1994. I looked at Free/NetBSD, but when I went on some of the Usenet BSD forums they basically insulted me, saying that my brand-new $3,500 PC wasn't good enough.

The main thing was this IDE interface that had a bug. Linux got a workaround within days or weeks.

https://en.wikipedia.org/wiki/CMD640

The BSD people told me that I should buy a SCSI card, a SCSI hard drive, and a SCSI CD-ROM. I was a sophomore in college; I had saved every penny to put $2K toward that PC, and my parents paid the rest. I didn't have any money for that.

The sound card was another issue.

I remember software-based "WinModems", but Linux had drivers for some of these. Same for software-based "Win Printers".

When I finally did graduate and had money for SCSI stuff, I tried FreeBSD around 1998, and it just seemed like another Unix. I had used Solaris, HP-UX, AIX, Ultrix, and IRIX. FreeBSD was perfectly fine, but it didn't do anything I needed that Linux didn't already do.


I don’t disagree with what you say. But why did Linux work on all that hardware? I assert that if you trace that line of thinking to its conclusion, the answer is the GPL.

Many people and organizations adapted BSD to run on their hardware, but they had no obligation to upstream those drivers. Linux mandated upstreaming (if you wanted to distribute drivers to users).


GPL does not mandate upstreaming your drivers.


It mandates making source available for upstreaming, if you are distributing.


That's actually true: if they wanted to distribute a Linux-compatible driver, they had to make the source available, so anyone could upstream it into the Linux kernel.

The GPL probably was indeed a factor in getting device makers and hackers to create open-source drivers for Linux. I am not convinced that it was a major one.


I'd say modern hardware, like the Xe iGPUs on 11th-gen Intel and up, got driver attention quickly. Some things, like Realtek 2.5Gb NICs, took a little while to integrate, but I think Realtek offered kernel modules. I remember NIC compatibility was sparse when I started playing with Linux around 1999-2000. What trips me up is command flags on GNU vs FreeBSD utils; ask me about the time I DoSed the colo from the jump machine by using the wrong packet interval argument.


> > But here we are with linux abandoning things like 'a.out'.

> I've completely missed this and would like to know more, may I be so bold as to request a link?

"A way out for a.out" https://lwn.net/Articles/888741/

"Linux 6.1 Finishes Gutting Out The Old a.out Code" https://www.phoronix.com/news/Linux-6.1-Gutting-Out-a.out (with links to two earlier articles)


Thanks!


> One vision is "medium-centric". You might want paths to always be consistently relative to a specific floppy disc regardless of what drive it's in, or a specific Seagate Barracuda no matter which SATA socket it was wired to.

> Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.

A third way, which I believe is what most users actually want, is a "controller-centric" view, with the caveat that most "removable media" we have nowadays has its own built-in controller. The left hand floppy is drive A no matter what's in it, the top CD-ROM drive is drive D no matter what's in it, but the removable Seagate Expansion USB drive containing all your porn is drive X no matter which USB port you plugged it in, because the controller resides together with the media in the same portable plastic enclosure. That's also the case for SCSI, SATA, or even old-school IDE HDDs; you'd have to go back to pre-IDE drives to find one where the controller is separate from the media. With tape, CD/DVD/BD, and floppy, the controller is always separate from the media.


> Only if you have an old-style kernel cmdline or fstab that references /dev/sd* instead of using the UUID=xyz or /dev/disk/by-id/xyz syntax.

Fixed that for you. It used to be normal to use the device path (/dev/hd* or /dev/sd*) to reference the filesystem partitions. Using the UUID or the by-id symlink instead is a novelty, introduced precisely to fix these device enumeration order issues.
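
For illustration, the difference in /etc/fstab looks roughly like this (the UUID and by-id name below are made-up placeholders, not real devices):

    # old style: depends on kernel enumeration order
    /dev/sda1                                      /      ext4  defaults  0  1

    # stable identifiers (take the real values from blkid or ls -l /dev/disk/by-id)
    UUID=0a1b2c3d-1111-2222-3333-444455556666      /      ext4  defaults  0  1
    /dev/disk/by-id/ata-ExampleDisk_SN1234-part2   /home  ext4  defaults  0  2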


Yes... things were certainly broken in the distant past


> A was the first floppy drive, B the (typically absent) second floppy drive

As another commenter mentioned, when you didn't have a second floppy drive, A: and B: mapped to two floppy disks in the same floppy drive, with DOS pausing and asking you to insert the other floppy disk when necessary. That explains why, even on single-floppy computers, the hard disk was at C: and not B: (and since so much software ended up expecting it, the convention continued even on computers without any floppy disk drive).


> Also, Windows had the ridiculous default of immediately running things when a user put in a CD or USB stick - that behaviour led to many infections and is obviously a stupid default option.

Playing devil's advocate: absent the obvious security issues, it's a brilliant default option from a user experience point of view, especially if the user is not well versed in the subtleties of filesystem management. Put the CD into the tray, close the tray, and the software magically starts; no need to go through the file manager and double-click on an obscurely named file.

It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted. Once CD-R writers became popular, and anyone could and did write their own data CDs, these assumptions no longer held.
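
The magic was driven by a tiny autorun.inf file in the root of the CD, something along these lines (the file names are whatever the publisher shipped; these are just placeholders):

    [autorun]
    open=setup.exe
    icon=setup.ico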

> I'm not even going to mention the old Windows design of everyone running with admin privileges on their desktop.

That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.
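
As an example of the latter: on Debian/Ubuntu-style systems, the stock /etc/sudoers contains a group rule like the one below, and the installer adds the first user to that group (the username here is just a placeholder):

    # /etc/sudoers: members of group "sudo" may run any command as any user
    %sudo   ALL=(ALL:ALL) ALL

    # adding a user to that group
    usermod -aG sudo alice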


> Playing devil's advocate: absent the obvious security issues, it's a brilliant default option from a user experience point of view, especially if the user is not well versed in the subtleties of filesystem management. Put the CD into the tray, close the tray, and the software magically starts; no need to go through the file manager and double-click on an obscurely named file.

It's a stupid default, though. One way around the issue is to present the user with the option to either just open the disc or run the installer, and allow them to change the default if they prefer the less secure option.

> It made more sense back when most software was distributed as pressed CD-ROMs, and the publisher of the software (which you bought shrink-wrapped at a physical store) could be assumed to be trusted

This allowed Sony BMG to infect so many computers with their rootkit (https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootk...).

> That design makes sense for a single-user computer where the user is the owner of the computer, and all software on it is assumed to be trusted. Even today, many Linux distributions add the first (and often only) user to a sudoers group by default.

A sudoers group is different, though, as it maintains the distinction between the files a user is expected to change (i.e. the ones they own) and the operations that require elevated permissions (e.g. installing system software). Earlier versions of Windows did not have that distinction, which was a huge security issue.

