
I'm glad the pendulum is swinging back with this one. With UI paradigms, we seem to have this tendency to throw out the baby with the bathwater, or to be so intrigued by the possible new benefits (buttons can change according to context!) that we forget which current benefits we would give up to get them (learnability and muscle memory, because the button always does the same thing; being able to feel your way to a button without looking at it).

It reminds me of what happened with the flat UI/anti-skeuomorphism wave a bit over a decade ago. It seemed like someone got so incensed by the faux leather in the iPhone's Find My Friends app (supposedly made to look like it had the same stitching as the leather upholstery in Steve Jobs' private jet) that they went on a crusade against anything "needlessly physical looking" in UI. We got the Metro design language from Microsoft as the fullest expression of it, with Apple somewhat following suit in iOS (but later walking back some things too) and later Google's Material Design walking it back a bit further (drop shadows making a big comeback).

But for a while there, it was genuinely hard to tell which bit of text was a label and which was a button, because it was all just bits of black or monocolor text floating on a flat white background. It's like whoever came up with the flat UI fad didn't realize how much hierarchy and structure was being conveyed by the lines, shadows and gradients that had suddenly gone out of vogue. All of a sudden we needed a ton of whitespace between elements to understand which worked together and which were unrelated. Which is ironic, because the whole thing started as a crusade against designers putting their own desire for artistic expression above their users' needs by wasting UI space on showing off their artistic skill with useless ornaments, but it led to designers putting their own philosophical purity above their users' needs, by wasting UI space on unnecessary whitespace and forcing low information density on everyone.



There's no excuse for that any more, especially since the invention of the ideal diode.[1]

[1] https://www.ti.com/lit/an/slvae57b/slvae57b.pdf


The reason is that it makes subroutine return and stack frame cleanup simpler.

You know this, but background for anyone else:

ARM's subroutine calling convention places the return address in a register, LR (which is itself a general purpose register, numbered R14). To save memory cycles - ARM1 was designed to take advantage of page mode DRAM - the processor features store-multiple and load-multiple instructions, which have a 16-bit bitfield to indicate which registers to store or load, and can be set to increment or decrement before or after each register is stored or loaded.

The easy way to set up a stack frame (the way mandated by many calling conventions that need to unwind the stack) is to use the Store Multiple, Decrement Before instruction, STMDB. Say you need to preserve R8, R9, R10:

STMDB SP!, {R8-R10, LR}

At the end of the function you can clean up the stack and return in a single instruction, Load Multiple with Increment After:

LDMIA SP!, {R8-R10, PC}

This seemed like a good decision to a team producing their first-ever processor on a minimal budget, needing to fit into 25,000 transistors and to keep power dissipation low enough to use a plastic package, because a ceramic package would have blown their budget.

Branch prediction wasn't a consideration as it didn't have branch prediction, and register pressure wasn't likely a consideration for a team going from the 3-register 6502, where the registers are far from orthogonal.

Also, it doesn't waste instruction space: you already need 4 bits to encode 16 registers, and it means that you don't need a 'branch indirect' instruction (you just do MOV PC,Rn) nor a 'return' (MOV PC,LR if there's no stack frame to restore).

There is a branch instruction, but only so that it can accommodate a 24-bit immediate (implicitly left-shifted by 2 bits so that it actually addresses a 26-bit range, which was enough for the original 26-bit address space). The MOV immediate instruction can only manage an 8-bit value (rotated into position by the barrel shifter), so I can see why Branch was included.
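
A back-of-the-envelope check of that encoding (my own sketch, using only the field widths mentioned above):

```python
# ARM1-style branch: a 24-bit signed word offset, implicitly shifted
# left by 2 bits because instructions are word-aligned. The reachable
# span is therefore 2^24 words = 2^26 bytes - exactly the original
# 26-bit address space.
OFFSET_BITS = 24

span_words = 2 ** OFFSET_BITS      # distinct word offsets encodable
span_bytes = span_words * 4        # << 2: each word is 4 bytes

print(span_bytes == 2 ** 26)       # True: covers the full 26-bit range
print(span_bytes // (1024 * 1024)) # 64 (MB)
```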

Indeed, mentioning the original 26-bit address space: this was because the processor status flags and mode bits were also readable and writable through R15, along with the program counter. A return (e.g. MOV PC,LR) has an additional bit, indicated by an S suffix, that says whether to restore the flags and processor state. If you were returning from an interrupt it was necessary to write "MOVS PC, LR" to ensure that the processor mode and flags were restored.

# It was acceptable in the '80s, it was acceptable at the time... #

Ken Shirriff has a great article "Reverse engineering the ARM1" at https://www.righto.com/2015/12/reverse-engineering-arm1-ance....

Getting back to multipliers:

ARM1 didn't have a multiply instruction at all, but experimenting with the ARM Evaluation System (an expansion for the BBC Micro) revealed that multiplying in software was just too slow.

ARM2 added the multiply and multiply-accumulate instructions to the instruction set. The implementation just used Booth recoding, performing the additions through the ALU, and took up to 16 cycles to execute. In other words, it performed only one Booth chunk per clock cycle, with early exit if there was no more work to do. And as in your article, it used the carry flag as an additional bit.
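
To make the "one Booth chunk per cycle, with early exit" idea concrete, here's a toy Python model (my own sketch of the arithmetic, not ARM2's actual datapath, which reused the ALU and barrel shifter): radix-4 Booth recoding consumes two multiplier bits per "cycle" and stops once the remaining multiplier bits are zero, which is why small multipliers finish early and the worst case for 32 bits is 16 cycles.

```python
def booth_mul(a, b, width=32):
    """Multiply a * b with radix-4 Booth recoding, two multiplier bits
    per step. Valid for non-negative b below 2**(width - 1)."""
    acc = 0
    cycles = 0
    for j in range(0, width, 2):
        lo = (b >> (j - 1)) & 1 if j else 0  # b[j-1], with b[-1] = 0
        mid = (b >> j) & 1
        hi = (b >> (j + 1)) & 1
        digit = mid + lo - 2 * hi            # Booth digit in {-2..2}
        acc += digit * (a << j)              # one shifted add per "cycle"
        cycles += 1
        if (b >> (j + 1)) == 0:              # early exit: no work left
            break
    return acc, cycles

print(booth_mul(123, 45))  # (5535, 4): correct product in 4 "cycles"
```

Note the maximum of `width // 2 == 16` iterations for a 32-bit multiplier, matching the "up to 16 cycles" figure above.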

I suspect the documentation says 'the carry is unreliable' because the carry behaviour could be different between the ARM2 implementation and ARM7TDMI, when given the same operands. Or indeed between implementations of ARMv4, because the fast multiplier was an optional component if I recall correctly. The 'M' in ARM7TDMI indicates the presence of the fast multiplier.


MQTT is being used a lot more in recent years inside of factories for sharing data between machines. Historically it's been used in Oil & Gas for SCADA (getting data from remote well sites).

10+ years ago we added it to Kepware (OPC server) and streamed tag values to "the cloud". I was at a conference giving a presentation on it when Arlen Nipper, one of the creators of MQTT, came up after the presentation and said I did a "decent job". It was humbling :). Fast forward to today, and we have a new company (HighByte) modeling factory data at the edge and sending it via MQTT, SparkplugB (protocol over MQTT), direct to S3, Azure Blob, etc, etc.

All that to say, MQTT is a big driver in Industry 4.0, and it's cool to see it so heavily used all these years later.


Agreed on hoping this is the inflection point, but only partial agreement that it's about adblock. For sure Google wants adblock to die, but I think it goes even deeper than that.

I think it's part of a much bigger trend in tech in general but also in Google: Removing user control. When you look at the "security" things they are doing, many of them have a common philosophy underpinning them that the user (aka device owner) is a security threat and must be protected against. Web integrity, Manifest v3, various DoH/DoT, bootloader locking, device integrity which conveniently makes root difficult/impossible, and more.

To all the engineers working on this stuff, I hope you're happy that your work is essentially destroying the world that you and I grew up in. The next generation won't have the wonderful and fertile computing environment that we enjoyed, and it's (partly) your fault.


> So is open source actually a core item as the author asserts or just a nice to have?

I think there's a difference in terms here. Having a computer that lasts for 50 years doesn't necessarily mean that you want a computer that is forever unchanging, frozen in amber. You should be able to upgrade a long-term computer, if you want to (including the software); the point is just that you don't have to upgrade.

For the "frozen in amber" use case, sure, you could just pirate the proprietary stuff and hope to fly under the radar. But for the living use case, you need open source, even if that's based on some decompiled proprietary code.


The OSI stack was also designed using the packet-switching approach. Rob Graham's "OSI Deprogrammer" is a book-length article about how TCP/IP beat OSI, and how the OSI model is entirely worthless: https://docs.google.com/document/d/1iL0fYmMmariFoSvLd9U5nPVH...

I'm not sure he's right, but I do think his point of view is important to understand.


Instead we always find a USB type mini B when needing a micro B, a micro B when needing a type C, and a type C when needing an extended micro B. If you reveal a spare extended micro B whilst rummaging around, then it will additionally transpire that the next cable needed will be a mini B, irrespective of any prior expectation you may have held about the device in question.

A randomly occurring old-school full-size type B may be encountered during any cable search, approximately 1% of the time, usually at the same moment your printer jams.

What I really don't understand, however, is why I keep finding DB13W3s in my closet.


> What happens when it's no longer fun, but 150,000 people depend on it for their job?

Then someone else can take over, or not, depending on how important it really is to those 150,000.

From the perspective of a business, using open source is strictly safer than closed source. With closed source, if the entity owning the code goes bankrupt, increases licensing fees substantially, just axes the project (incidentally something Google is also known for), or loses interest in some other way, you are out of luck. Go find something similar and hope there is a migration path. With open source, the equivalent is the maintainer walking away because they are bored, and you have the code right there to take over, or to pay someone else to do so.

> I question the validity of open-sourcing anything for fun unless you design the licence, and more, to enable you to walk away and/or get bored.

Every open source license is already designed so that you can just walk away for whatever reason, e.g. when you are bored.


Fun fact: US government agencies are required to progressively migrate to IPv6-only environment, beginning with completing at least one pilot of IPv6-only system by FY 2021: https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-0.... Apparently, National Archives decided to pick archived Clinton White House website for this: https://clintonwhitehouse1.archives.gov/

Not having lived back then, I'm not sure today's CD is better. Too often the web breaks features people rely on. Worse, they do it randomly, with no notice and no transition time while you update. I miss the stability of knowing things will work like they did for a long time, and that when they break it's at a time I chose to apply the upgrade. (It didn't always work out this way; random bugs were a thing and still are.)

Quality control was also a lot more important - you couldn't ship something with many bugs as customers would be forced to look at competitors if they hit your bug.


Sure, it's misleading, needs a lot of "interpretation" doing a non-trivial amount of the lifting to make it map to anything in the real world, mismatches things that happen in the real world while leaving no room for other things that happen in the real world a lot, and will lead anyone who tries to use it to understand the real world deeply astray, but it isn't always wrong about absolutely everything so it has some non-zero "utility".

Fine. It's not wrong about absolutely everything all the time. It isn't bereft of all truth. It's just something that is of net negative value. I see no value in insisting on trying to "rescue" a net-negative value model of the world.

I suppose you could say ultimately I agree with you though. The OSI model isn't useless. It's worse than useless. You're better off trying to understand networking from basic first principles than through the lens it provides.


The paper asks "why does this feature exist?" - probably they haven't gone far enough back in history. (Note: I've worked on x86 clones; I understand this stuff in far too great detail.)

Originally on x86 systems memory was in VERY short supply - SMM memory was the DRAM that the VGA window in low memory (0xA0000) overlaid. Normal code couldn't access it because the video card claimed memory accesses to that range of addresses, so when the CPU was in SMM mode the north bridge switched data and instruction accesses to that range to go to DRAM rather than to the VGA card. That's great, except remember that SMM mode was used for special setup stuff on laptops, and sometimes they need to be able to display on the screen. That's what this special mode was originally for: so that SMM code can display on the screen. (It's also likely why SMM graphics were so primitive - you're switching in and out of this mode for every pixel you write.)


FOSS is an activity taken up by people who want a particular solution to a problem. When the tech and IT industry was vastly smaller than it is today, there was actually a larger number of high-quality FOSS projects. Today there are more FOSS projects than ever by number, but a good chunk are what I would call "throwaway" projects. Another large portion are projects that just exist to fill gaps in new ecosystems (a new language/framework/platform/etc appears, and now it needs a lot of projects to provide libraries to fill needs). And there are lots of immature engineers growing up in a culture without documentation, too quick to copy snippets rather than learn a tool completely. Tech itself is changing, and that will (and does) affect open source.

But the existence of open source is self-perpetuating at this point. There isn't really a bubble to burst. FOSS will continue to be here as long as the computer allows anyone to create and share, for free, a work that can be reused and that fills a need. It is not sustainable, because it does not need sustaining. Anybody with an internet connection can decide for themselves to create open source. As long as we provide an unimpeded means for them to share their work (newsgroups, mailing lists, FTP, mirrors, etc), there will be open source. And as long as somebody has leisure time, and there are nerds interested in programming during their leisure time, some will spend it on open source. It's like "sports": people will still play sports regardless of a business motive.

I think people are shocked by the whole "business source" license thing, but I'm not. Open Source was never a business model, no matter how much some of its proponents wanted it to be. It's taken a while for people to learn that lesson, but it's starting to sink in. The fact that fewer startups will be open source now isn't going to change the open source community at all. The community will be here regardless of what any business (or individual government) wants.

As far as "our reliance on open source", that's just a happy accident. It's free of charge and free of use, so business uses it. If for some reason it stopped being free of charge or free of use, then business would just pay for proprietary software like it used to. I don't see this happening anytime soon, though, unless somebody passes a law banning Copyleft.


> Also, good old-fashioned pen and paper, when used, is surrounded by various systems and various equipment

I don't know what the US does, but this is how it works in Germany: Around half-ish of the polling station staff are clerks of the local administration (normal office workers of the city hall, who almost always serve their whole life - they are not re-appointed by the current ruling party), half (or more) are citizens. If not enough citizens sign up voluntarily, random citizens are drafted.

The equipment is: A list of all eligible citizens, who can vote (no registration is required), a ballot box with a very flimsy padlock, for which the polling station staff has the key, mobile privacy screens for the voters, pens and the actual ballots.

If a citizen wants to vote, they show their national ID (something which the US does not have, I know, but that's not the fault of the paper voting process) and get a ballot. They make their choice behind the privacy screen and put the ballot in the ballot box.

After the polling station closes, the ballot box is shaken around a bit and anyone[1] can come to look / supervise the polling station staff as they count the votes. The number of votes must be roughly equal to the number of voters. The result is given to city hall via phone, the ballots are put back into the ballot box and can be recounted later, if necessary. City hall puts all results on their website, so the polling stations can verify them.

If a ballot has more than the allowed number of votes or something written on it, the polling station staff holds a quick vote, majority decides.

That's all, the whole process. No ink, no complex seals (the key for the ballot box is in a box with the blank ballots, it's only there to prevent accidental opening of the ballot box), no timed process (except "voting until 18 o'clock"), no politically motivated selection of polling station staff or observers.

Would you really say that this is more complicated than electronic voting, including understanding the algorithms? Especially for someone with no CS background.

And it works - will you sometimes have one ballot more than voters? Yeah, sure, because someone may have forgotten to count a voter. But those tiny, human discrepancies IMO don't matter when you have >1000 ballots. The result is correct enough, and based on people keeping each other in check, not on technical security measures. Everyone can understand the process, and everyone can be a part of it.

It does not meet the correctness guarantees of (perfect, untamperable) electronic voting, but it's IMO a heck of a lot simpler, just as trustworthy at scale and anonymous.

[1] literally anyone, even non-citizens, no registration required - we even give them coffee if some is still left :D


> just speaking past each other here

no I'm not

if your program has UB it's broken, and it doesn't matter if it currently happens to work correctly under a specific compiler version; it's also fully your fault

sure there is shared responsibility through the stack, but _one of the most important aspects when you have something like a supply chain is to know who supplies what under which guarantees taking which responsibilities_

and for C/C++ it's clearly communicated that it's solely your responsibility to avoid UB (in the same way that for batteries it's the battery vendor's responsibility to produce batteries which can't randomly catch fire, the firmware vendor's responsibility to use the battery driver/charging circuit correctly, and your OS's responsibility to ensure that a random program faulting can't affect the firmware, etc.)

> be misused and handle them in a reasonable manner

For things provided B2B, that's in general only the case in contexts involving end users, likely accidents and the like.

Instead it's the responsibility of the supplier to be clear about what can and can't be done with the product, and if you do something outside of the spec it's your responsibility to continuously make sure it's safe (or in general to ask the supplier for clarifying guarantees wrt. your usage).

E.g. if you buy capacitors rated for up to 50°C environmental temperature that happen to work up to 80°C, you still can't use them at 80°C, because there is 0% guarantee that other capacitors from the same batch will also work at 80°C. In the same way, compilers are only "rated"(1) to behave as expected for programs without UB.

If you find that unacceptable because it's too easy to end up with accidental UB, then you should do what anyone in a supply chain with a too-risky-to-use component would do:

Replace it with something less risky to use.

There is a reason the ONCD urged developers to stop using C/C++ and similar where viable: that is pretty much just following standard supply-chain-management best practice.

(1: just for the sake of wording. Though there are certified, i.e. ~rated, compiler revisions.)


Coders work with what they have. The work itself is shaped by the tools the workers use to produce it. If you give workers inadequate tools, you can't expect high quality.

Java currently doesn't provide any decent (general) solution for the problem of nullability - JSR-305 is a failed spec; Optional is very verbose, doesn't work for many use cases (e.g. isn't Serializable), and funnily enough there's no guarantee the Optional instance itself is non-null; value types (primitive, and the preview support for complex ones) obviously cover only very specific use cases.


What gets me is that much of the OSS/Linux ecosystem consists of thousands of lashed-together piles of code written by independent and only very loosely coordinated groups, much of it coded and lashed together by amateurs for free, and it is still more robust than software created by multi-billion dollar corporations.

Perhaps one reason is that OSS system programmers are washing their dirty linen in public; not a matter of "many eyes make bugs shallow", but that "any eyes make bad code embarrassing".

Just for example, I'm planning to make one of my commercial projects open source, and I am going to have to do a lot of fixing up before I'm willing to show the source code in public. It's not terrible code, and it works perfectly well, but it's not the sort of code I'd be willing to show to the world in general. Better documentation, TODO and FIXME fixing, checking comments still reflect the code, etc. etc.

But for all my sense of shame for this (perfectly good and working) software, I've seen the insides of several closed-source commercial code bases and seen far, far worse. I would imagine most "enterprise" software is written to a similar standard.


The encodings we use today have a surprisingly deep and complex history. For more, see: "The Evolution of Character Codes, 1874-1968" https://ia800606.us.archive.org/17/items/enf-ascii/ascii.pdf

I was in my 20s during the peak hysteria of the post-9/11 and GWOT years. I had to work out whether the constant terror threat hyped 24/7 by the media and DHS was real.

The fact that global infra is so flimsy and vulnerable brought me tremendous relief. If the terror threats were real, we would have been experiencing infrastructure attacks daily.

I remember driving through rural California thinking if the terrorist cells were everywhere, they could trivially <attack critical infra that I don't want to be flagged by the FBI for>

I've read a lot of cyber security books like Countdown to Zero Day, Sandworm, and Ghost in the Wires, and each one brings me relief. Many of our industrial systems have the most flimsy, pathetic, unencrypted & uncredentialed wireless control protocols that are vulnerable to remote attack.

The fact that we rarely see incidents like this, and when they do happen, they are due to gross negligence rather than malice, is a tremendous relief.


https://www.usenix.org/system/files/1311_05-08_mickens.pdf

"Perhaps the worst thing about being a systems person is that other, non-systems people think that they understand the daily tragedies that compose your life. For example, a few weeks ago, I was debugging a new network file system that my research group created. The bug was inside a kernel-mode component, so my machines were crashing in spectacular and vindictive ways. After a few days of manually rebooting servers, I had transformed into a shambling, broken man, kind of like a computer scientist version of Saddam Hussein when he was pulled from his bunker, all scraggly beard and dead eyes and florid, nonsensical ramblings about semi-imagined enemies. As I paced the hallways, muttering Nixonian rants about my code, one of my colleagues from the HCI group asked me what my problem was. I described the bug, which involved concurrent threads and corrupted state and asynchronous message delivery across multiple machines, and my coworker said, “Yeah, that sounds bad. Have you checked the log files for errors?” I said, “Indeed, I would do that if I hadn’t broken every component that a logging system needs to log data. I have a network file system, and I have broken the network, and I have broken the file system, and my machines crash when I make eye contact with them. I HAVE NO TOOLS BECAUSE I’VE DESTROYED MY TOOLS WITH MY TOOLS. My only logging option is to hire monks to transcribe the subjective experience of watching my machines die as I weep tears of blood.”"


This is what forums like this one are for. Ordinary news isn't going to have more than a passing mention of the xz hack, or log4j, or meltdown, or heartbleed. Find (or start) a private group chat for technologists you know to share news like this.

I have no idea how they are powering it, but with the speed with which solar and battery prices are falling, and the slowness of getting a new big grid interconnection, I would not be surprised to see new data centers that are primarily powered by their own solar+batteries. Perhaps with a small, fast and cheap grid connection for small bits of backup.

If not this year, definitely in the 2030s.

Edit: for a much smaller scale version of this, here's a titanium plant doing this, instead of a data center. The nice thing about renewables is that they easily scale; if you can do it for 50MW you can do it for 500MW or 5GW with a linear increase in the resources. https://www.canarymedia.com/articles/clean-industry/in-a-fir...


> It would be great to make this feature relevant again but use decimal unicode codes, and have it work in linux too this way [...]

IBus[1] can (apparently) do that!

[1] https://wiki.archlinux.org/title/IBus#Unicode_input


This comment makes me feel like I live on a different planet.

In 1999, I built a completely custom Win32 GUI for a (brandable) chat application - that was the product the company I worked for was selling.

Pure C and C++. It was a 32-bit app, not 64 bits. And we felt bad for it being 250KB single executable (skin included in resource section) and not “150KB or less” as was our initial target. But making it accessible and fully skinnable/themable/l10n/i18n did add quite a bit which we hadn’t realized when thinking 150KB was realistic. But it’s not like we resorted to any crazy tricks; that was just the stripped linker output.

It was still a reasonable download over the dominant dial-up internet of the time (28.8 kbps IIRC).

Sure, modern tools add a lot of cruft that you’d have to work hard to reduce, but 40MB. Oh man.


Many of the original C++ decisions come back to how it was supposed to be TypeScript for C, which was a reason why it became widely used, and why some warts are the way they are.

Like having C structs magically turn into C++ ones, thus implicit rules like these.

Anyone who cares about C++ evolution should read "Design and Evolution of C++", not only for how it came to be, but also for the safety approaches over plain C that Bjarne is still arguing for to this day in WG21 meetings.


I lived in Tokyo before Google Maps, and the answer was: printed maps. Every business card had a map on the back, every ad for an event had a map, every personal invitation had a hand-drawn map. But yeah, if you were going somewhere as a group, you'd meet at a known point (outside train station etc) and then head over together.

If you were roughing it on your own and only had the address, things got more interesting. Train stations always had detailed maps of major landmarks, so finding those was not an issue. If you were looking for something too small to be covered (say, a restaurant), you'd head to the chōme and then start winnowing down. Police boxes (koban) always had detailed neighborhood maps bolted to a wall nearby listing every single business and family by name, albeit usually in handwritten Japanese only, and you could ask the cops for directions too.

The final boss was the non-linear house numbering scheme, though. Some friends and I once spent a fruitless hour searching for the HR Giger bar in Tokyo, which we knew was at X-Y-Z, but only managed to find X-Y-(Z+1) and X-Y-(Z-1).


> Interesting. I didn't know that. Do you do the conversion in your head, like 2 blocks = 160*2 meters, when someone use blocks in a conversation ? Just curious.

I do it explicitly sometimes, but mainly I do 2 blocks -> 5 minutes of walking.

It's also useful to think in blocks because addresses here (and in many places in the US) go up/down by 100 for each block; so if I'm going from 12XX to 7XX (colloquially "The 1200 block to the 700 block") that's 5 blocks.
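
As a toy illustration of that arithmetic (hypothetical numbers: addresses stepping by 100 per block, with the ~160 m block length from the quote above):

```python
# Hypothetical sketch: US-style grid addresses go up/down by 100 per
# block, so block count is just the address difference over 100.
BLOCK_METERS = 160  # rough block length, per the quoted conversation

def blocks_between(addr_a, addr_b):
    """Number of blocks between two street addresses on the same grid."""
    return abs(addr_a - addr_b) // 100

print(blocks_between(1200, 700))                 # 5 (the 12XX-to-7XX example)
print(blocks_between(1200, 700) * BLOCK_METERS)  # 800 (meters, roughly)
```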

> also ... LOOK everyone I found an American who uses metric system as first choice in their post!!!

Meters are better than feet in every way even if you use miles for long distances. There are 5280 feet in a mile, but 1600 meters in a mile[1]. One of these numbers is far easier to deal with.

As an aside, I wonder about using centimeters for height; In the US, if you look at a bell-curve for "self reported height of men" you see a drop off just below 6' (~183cm) and a huge spike at 6', distorting the curve[2]. Do you see similar distortions at 180 or 175cm in countries where the metric system is used? Given that far more men are "almost" 180cm or 175cm (in the US, 175 is average) are there more men "rounding up" on their heights in such places?

Athletes, in particular, are incentivized to round-up, as scouts won't even look at you if you are "too short" and scouts like to use even numbers for cutoffs.

1: Technically 1609, but that makes an eighth of a mile estimate off by only about a meter, and the "mile run" event in US high-school track is actually 1600 meters.

2: The center of the bell-curve is also 2-3cm taller than measured heights of men in the US.


That's a dehumanizing system. Have we lost our way, HN? Are we so immersed in the bleakness of tech, it comes so naturally for us, to propose "hey, let's create surveillance machines to perpetually watch people working, for the rest of their productive lives" and it's something we have to pause and think about?

Let's not build Hell on Earth for whatever reason it momentarily seems to make business sense.

