Your post reminded me of when Yahoo IM updated their chat protocol to an incompatible version with a gradual rollout! Half of their eight servers ran v1 and half ran v2, so a v1-only client would connect only half the time, depending on which server the round-robin DNS sent you to. It took me forever to figure out, but the fix was to put the IP addresses of the four v1 servers in the hosts file. (Until the client updated its support for v2.)
> And, especially what most people call big-endian, which is a bastardized mixed-endian mess of most significant byte is zero, while least significant bit is likewise zero.
In the 1980s at AT&T Bell Labs, I had to program 3B20 computers to process the phone network's data. 3B20s used the weird byte order 1324 (maybe it was 2413) and I had to tweak the network protocols to start packets with a BOM (byte order mark) (as the various switches that sent data didn't define endianness), then swap bytes accordingly.
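The BOM-then-swap trick can be sketched in Python with the stdlib `struct` module. Everything here is hypothetical (the 3B20 protocols were obviously not Python): a 16-bit BOM of 0xFEFF, which reads back as 0xFFFE when the sender's byte order differs from the reader's assumed order.

```python
import struct

BOM = 0xFEFF  # reads as 0xFFFE if the sender used the other byte order

def read_packet(data: bytes) -> list[int]:
    """Decode a packet of 16-bit words that starts with a byte order mark."""
    (bom,) = struct.unpack_from(">H", data, 0)
    fmt = ">" if bom == BOM else "<"          # big- or little-endian sender
    n = (len(data) - 2) // 2
    return list(struct.unpack_from(f"{fmt}{n}H", data, 2))

# A big-endian sender and a little-endian sender emit different bytes,
# but both decode to the same words.
words = [0x1234, 0xABCD]
be = struct.pack(f">H{len(words)}H", BOM, *words)
le = struct.pack(f"<H{len(words)}H", BOM, *words)
assert read_packet(be) == read_packet(le) == words
```

This handles the two single-word orders; a mixed order like the 3B20's 1324 would need its own swap table on top.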
While I have no personal experience with the 3B2 series, its documentation[1] clearly illustrates the GP's complaint: starting from the most significant binary digit, bit numbers decrease while byte addresses increase.
As for networking, Ethernet is particularly fun: least significant bit first, most significant byte first for multi-byte fields, with a 32-bit CRC calculated for a frame of length k by treating bit n of the frame as the coefficient of the (k - 1 - n)th order term of a (k - 1)th order polynomial, and sending the coefficients of the resulting 31st order polynomial highest-order coefficient first.
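That LSB-first, reflected convention is exactly what the common "reflected" CRC-32 implements, and Python's `zlib.crc32` matches the Ethernet FCS. A small sketch (the frame bytes are arbitrary stand-ins; 0x2144DF1C is the well-known CRC-32 residue left by any frame with a valid trailing FCS):

```python
import struct
import zlib

frame = b"\x01\x02\x03\x04payload"        # any byte string stands in for a frame
fcs = zlib.crc32(frame)                   # reflected CRC-32, as Ethernet uses
wire = frame + struct.pack("<I", fcs)     # FCS appended least significant byte first

# Running the same CRC over frame + FCS leaves a fixed residue for any valid frame.
assert zlib.crc32(wire) == 0x2144DF1C
```

The `"<I"` byte order looks backwards until you remember the bits within each byte go out LSB first, which is what makes the highest-order coefficient leave the wire first.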
I was in charge of the firmware for a modem. I had written the V.42 error correction, and we contracted out the addition of the MNP correction protocol. They used the same CRC.
The subcontractor (Indian, which matters only because of the culture's emphasis on book learning) found my CRC function, decided it didn't quite look like the academic version they were expecting, and added code to swap it around and use it for MNP, thus making it wrong.
When I pointed out it was wrong, they claimed they had tested it. By having one of our modems talk to another one of our modems. Sheesh.
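Loopback testing can't catch that class of bug: if both ends compute the same wrong CRC, the check still passes. A hypothetical sketch using a standard reflected CRC-16 (polynomial 0x8005, as in CRC-16/ARC; the actual MNP code was of course different) and a "bit-reversed" buggy variant standing in for the subcontractor's swap:

```python
def crc16_arc(data: bytes) -> int:
    """Reflected CRC-16 with polynomial 0x8005 (CRC-16/ARC)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def crc16_buggy(data: bytes) -> int:
    """Same CRC with the 16 result bits reversed: the hypothetical 'fix'."""
    return int(f"{crc16_arc(data):016b}"[::-1], 2)

msg = b"123456789"
assert crc16_arc(msg) == 0xBB3D              # standard CRC-16/ARC check value
# Two modems running the same buggy code agree with each other...
assert crc16_buggy(msg) == crc16_buggy(msg)
# ...but disagree with any correct implementation on the far end.
assert crc16_buggy(msg) != crc16_arc(msg)
```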
This is an excellent lesson for data transport protocols and file formats.
> I had to tweak the network protocols to start packets with a BOM (byte order mark) (as the various switches that sent data didn't define endianness), then swap bytes accordingly.
(A similar thing happened to me with the Python switch from 2 to 3. Strings all became Unicode, and it's too difficult to add the b sigil in front of every string in a large codebase, so I simply ensured that, at the very few places where data was transported to or from files, all the strings were properly converted to what the internal process expected.)
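The convert-at-the-boundary approach can be sketched like this (names and the UTF-8 choice are hypothetical): everything inside the program stays `str`, and `bytes` appear only at the I/O edge.

```python
ENCODING = "utf-8"  # assumed file/wire encoding

def to_file(text: str) -> bytes:
    """Encode at the boundary; the rest of the codebase never sees bytes."""
    return text.encode(ENCODING)

def from_file(raw: bytes) -> str:
    """Decode at the boundary, so no b"" sigils are needed elsewhere."""
    return raw.decode(ENCODING)

assert from_file(to_file("héllo")) == "héllo"
```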
But, as many other commenters have rightly noted, big-endian CPUs are going the way of CPUs with 18-bit bytes and ones'-complement arithmetic, so unless you have a real need to run your program on a dinosaur, you can safely forget about CPU endianness issues.
My fantasy: After the salesman says (for the 4th time), "Sorry, the manager won't approve that price, but if you could add X hundred dollars, I'm sure I can convince them!", I wait until they are through high-fiving each other and then tell the salesman "Sorry, my trust manager didn't approve that price. I'm sure I can convince him if you lower the price by X hundred dollars".
My reality: I use my bank's car-buying service and pay the bank's negotiated price.
Age verification is simple! A Kevlar strap must be attached to the user before the device will power up. A probe in the strap takes a drop of blood from the user and analyzes the protein markers to determine the user's age. (See the Stanford U. study for details.)
Surprised G. Orwell or A. Huxley didn't think of it first.
Not enough, the user may get someone else to wear the strap for them. The only solution is Neuralink(tm) with a built in secure element and DRM to ensure that content is delivered directly from the source to the age verified user’s brain, without any so called “analog hole” through which minors or non-paying users could view the content.
Allowing Unicode characters, then stating that best practice is to stick with ASCII, is weird. (Go is not alone in this practice.) Unicode identifiers have a host of issues: some characters have no case distinction, some have a title case but no uppercase, some scripts use a special form for the last letter in a word rather than the first (Hebrew has five "final form" letters), etc. Does Go specify the meaning (exported or not) if a letter has no case, or if an identifier starts with a zero-width joiner character? Without a huge list of detailed rules, too much is left to the implementation to decide. I prefer to stick with ASCII for names.
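A few of those quirks are easy to demonstrate from Python's `unicodedata` (Go's own identifier rules are a separate question; this just shows the underlying Unicode facts):

```python
import unicodedata

# Some letters have no case at all: Hebrew alef is category Lo, "other letter".
assert unicodedata.category("א") == "Lo"

# Some have a distinct title case: U+01C5 is Lt, neither upper nor lower.
assert unicodedata.category("ǅ") == "Lt"

# Hebrew mem has a "final form" used only at the end of a word.
assert unicodedata.name("ם") == "HEBREW LETTER FINAL MEM"

# Case mapping can even change length: German sharp s uppercases to two letters.
assert "ß".upper() == "SS"
```

Any rule of the form "exported if the first letter is uppercase" has to say something about every one of these cases.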
Fun fact: When printing with movable type began, printers would travel with large "type cases" containing the small wood or metal blocks with glyphs on them. The ones they used frequently were kept in the lower half of the case, in easy reach. That's where the terms "lowercase" and "uppercase" come from.
It’s bad to disallow non-ASCII characters when programming for non-English business domains. The domain vocabulary often doesn’t translate well to English, or you need a glossary for someone familiar with the business domain to know which English term is supposed to mean which native business term. It’s just a pain. Conversely, transliterating the native term in ASCII can introduce ambiguities or be awkward or just plain weird for the native speaker.
Of course, Unicode can be abused, but ASCII isn’t completely free of that either. (Maybe it tickles your fancy to name a variable _o0O0o_ or l1lIl1lIl1lIl1l.) If your native script doesn’t have upper and lower case, compromises may have to be made. We have to trust programmers to use good judgement.
Anecdotally, I'm programming for non-English business domains in Go and Python, and I've literally never seen anyone use the native alphabet in identifiers - it's always either poor translations or transliterations.
it is weird, especially for Go with its semantic naming and famously opinionated compiler. it will gladly build code with a variable named 𖤐界ᥱᥲΣ੭, but God forbid it's unused.
No, because the string will be implicitly converted to `true`, and `(a && true) == a` for boolean `a`, so the expression will only be `false` when the assertion condition fails. Using `||` would always evaluate to `true`.
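Python's truthiness rules mirror the same logic: a non-empty string is truthy, so `and` passes the decision through to the condition, while `or` short-circuits to the string and can never fail.

```python
for cond in (True, False):
    # "reason" is truthy, so ("reason" and cond) has exactly cond's truth value...
    assert bool("reason" and cond) is cond
    # ...whereas ("reason" or cond) short-circuits to the truthy string every time.
    assert ("reason" or cond) == "reason"
```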
> In the modern world there is no plausible scenario where this would compromise a password that wouldn't otherwise also be compromised with equivalent effort.
Not sure about that. I'm no expert but for high risk scenarios one might have to worry about binoculars from the building opposite your window, power line monitoring, and timing attacks. All scenarios where the attacker cannot see your hands/keyboard.
I don't believe you're missing anything. This is just steganography with a possibly new covert channel, right? Apparently the secrecy depends on adversaries not noticing the special hardware deployed on each end. Would spread spectrum techniques work just as well?