If it helps, the miles-to-kilometers ratio is approximately the golden ratio... (1.609 vs 1.618.)
So, as a level N nerd, you can convert miles to kilometers by rounding to a nearby Fibonacci number and then taking the NEXT Fibonacci number (maybe fudging a bit in the direction you rounded).
Then, as a level N+1 nerd, you can realize that Fibonacci base exists, in which any integer can be represented as a sum of distinct Fibonacci numbers. (For example, 43 = 34 + 8 + 1, or, using a binary string to show which Fibs are involved, 43 = 10010001.) The conversion of miles to kilometers is then just a bit-shift operation.
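For the level N+2 nerds, here's a quick Python sketch of both tricks: a greedy Zeckendorf decomposition, then the "bit shift" that replaces each Fibonacci term with its successor (i.e., multiplies by roughly φ ≈ 1.618 ≈ 1.609):

```python
def fibs_up_to(n):
    """Fibonacci numbers 1, 2, 3, 5, 8, ... up to at least n."""
    fs = [1, 2]
    while fs[-1] < n:
        fs.append(fs[-1] + fs[-2])
    return fs

def zeckendorf(n):
    """Greedy decomposition into distinct Fibonacci numbers."""
    parts = []
    for f in reversed(fibs_up_to(n)):
        if f <= n:
            parts.append(f)
            n -= f
    return parts  # e.g. 43 -> [34, 8, 1]

def miles_to_km(miles):
    """Shift each Fibonacci term up to its successor."""
    fs = fibs_up_to(miles * 2)
    return sum(fs[fs.index(f) + 1] for f in zeckendorf(miles))
```

So `miles_to_km(43)` gives 70 (43 mi is actually 69.2 km), via 34 + 8 + 1 → 55 + 13 + 2.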
C isn't a language that lends itself to bootcamp-style learning.
With Javascript, you can get something on-screen in a few minutes, and even if you make mistakes, you will normally see something. It's a more forgiving environment.
With C, a small error prevents compilation at all, and it's going to be a relatively long time before you're ready to progress past the "printing text to the console" stage.
C is flatly harder to learn, and unless you're the kind of person who likes mental challenge, it's less rewarding than Javascript. It isn't the kind of thing you tackle because you need hirable job skills by the end of the month.
There are still some excellent C tutorials out there (for example, I think Handmade Hero's[0] intro to C is good, and Handmade Hero itself gets you to the "shiny colors on the screen" stage very quickly), but HH has a different mentality than a bootcamp. HH is about learning, exploring, breaking things, and figuring them out on the fly. A bootcamp is about gathering the minimum knowledge necessary to be productive as quickly as possible.
I will counter with the iiyama ProLite X4071UHSU-B1, the highest-contrast (5000:1 typical) LCD I have ever seen with such a competitive sticker price: $500–600, 4K, 40". Up to 75 Hz at 24-bit color, or 60 Hz at 40-bit color.
It has zero smart stuff, and it comes with an RS-232 input for which documentation exists to control _everything_ remotely. I think the latter is due to its brother being a 24/7-rated digital signage device, which typically implies remote management.
I'm passively looking for a newer model for a potential secondary setup on a different desk/location, but haven't stumbled on anything I'd prefer.
As interesting as the vessel is the noncompliance of the crew in being used as propaganda:
> This treatment turned worse when the North Koreans realized that crewmen were secretly giving them "the finger" in staged propaganda photos.
and
> Eventually the North Koreans threatened to execute his men in front of him, and Bucher relented and agreed to "confess to his and the crew's transgression." Bucher wrote the confession since a "confession" by definition needed to be written by the confessor himself. They verified the meaning of what he wrote, but failed to catch the pun when he said "We paean the DPRK [North Korea]. We paean their great leader Kim Il Sung". (Bucher pronounced "paean" as "pee on.")
I wasn't even familiar with "paean" as a verb; brilliant of the commander to know that non-native speakers wouldn't catch on to the idiosyncrasy of using an uncommon near-homophone as an insult.
1. USA ousts Saddam Hussein, and is obligated to help Iraq transition to a democratic government (because the invasion was done under the pretext of liberating a country from a mad dictator with weapons of mass destruction, instead of the commonly cited argument of securing resources for the American empire).
2. USA backs a candidate and initially gets a government elected that is favorable to the USA. However, these politicians turn out to be kind of scummy and screw up just about everything.
3. Iran realizes that elections can be influenced (they watched the USA influence the first election), and having lots of paramilitary type forces that are trained in information operations, decides to capitalize. They flood across the porous border to spread propaganda (the current scumbags in government make it so the Iranian propaganda doesn't even have to lie, it just has to point out how much the current guys suck).
4. Iraqi citizens, growing sick of the crappy politicians they elected initially, start listening to the propaganda and elect a government friendly to Iran. This is the first time this has happened in a long time; Iraq had not been allied with Iran for several decades.
5. End result: the USA invades, then eventually loses influence to Iran. The global hegemon got outplayed by a regional hegemon because it backed unethical, incompetent sellouts rather than finding a good candidate.
IBM's AS/400 (and all of its renames) is a 128-bit architecture. The huge address space is beneficial for implementing capability security on memory itself, plus using single-level store for the whole system (addresses span RAM and secondary storage like disks, NVMe, etc.).
Watching people try to teach John Nagle how to do networking is half the reason why I like reading the HN forums :-).
I'm not one to defer to authority too easily, but it's my experience that, when someone with enough experience says something that sounds out of this world, it's a good idea to think about it for a bit.
We had that in the early days of TCP. Then Berkeley broke it.
Originally, an IP address identified a destination host, not an interface card. Didn't matter over what device a packet arrived. The BSD people, though, tied IP addresses to the network interface, so it mattered over which device a packet arrived. This simplified outbound routing in BSD, but broke multipath TCP.
How does the construction industry work in the US? Do many people DIY? Do many people get hands-on with building homes while still hiring experts where needed? Is it common for city people to own farms/land nearby?
It seems it isn’t really that costly to build a nice 1000 sq. ft. place complete with appliances and modern amenities.
The construction industry in India is shitty - carpenters, painters, plumbers, masons, general contractors have little to no professional training and eff up projects often, the tools aren’t modern and renovation work is infrequent because of the poor quality of construction services (and non-existent DIY culture and ecosystem). Also, it’s probably looked down upon in the Indian society if a house owner goes DIY with any construction work (gardening is an exception).
I've done digital signage, controlled servos, used them as cameras, the works.
Right now I have:
- A 5-node Pi 2 cluster running k3s.io (https://github.com/rcarmo/raspi-cluster), and a separate Pi 2 I use as a Docker build box and local Docker registry.
- A Pi 4 as a "lab" desktop computer with a USB oscilloscope and FTDI cables to flash ESP8266s and Arduinos
- A Lakka.tv arcade/MAME box for the kids with a PS3 controller (no room for a proper PiCade, we just use the TV)
- A Pi 3A+ with a mic array for playing around with Google Assistant
- Another one that I carry with me as a “pocket server” to SSH into from my iPad over Bluetooth
- A Pi Zero W taped to the inside of my electricity meter trying to estimate power consumption (we have a spinning disk mechanical meter)
- Another Pi Zero W with an EnviroPi HAT that I use to demo Azure IoT solutions
- An ODROID U2 (Could be a Pi) running HomeKit and Node-Red for home automation, as well as a bastion container (all dockerized).
- A 3B hooked up to my 3D printer running OctoPi
And the list goes on. I have many older Model Bs lying around, and once used one to revive a dead synth whose MIDI keyboard still worked (I set up TiMidity and a SoundFont on it and it became the kids' piano).
I also ran a Plex server on a 3B until it became obvious that I needed to think about transcoding (but it worked fine for music).
You can do a _lot_ with Raspberry Pis.
I just hope they also beef up the 3A+ RAM at some point since 512MB RAM is too tight.
I'm also aware of a project from a major automotive supplier to attempt the same thing. From my understanding it's unlikely to succeed because manufacturers view suppliers as commodity producers of components they find boring like brakes, steering systems, sensors, transmissions, safety systems, fuel pumps, etc. Not as anything resembling a true partner. Not to mention that it would require competitors to collaborate closely in the production of a highly complex piece of software.
That said, how many times have we seen this story in other industries?
1. Legacy corporation is warned that an integrated, consumer-friendly software architecture for [multi-billion product line] is needed, and failure to produce one creates an opportunity for an insurgent competitor and/or commodification by adjacent supply chain players.
2. Leadership laughs and ignores mounting evidence of just such a threat emerging for up to a decade.
3. Lo and behold, prophesied competitor finally emerges and finds immediate market success.
4. Legacy company announces that they'll bring a competing solution to market, promising investors that they'll produce a similar quality OS, but across 39 models, uniting 457 separate component suppliers AND the entire post-purchase product support infrastructure. They're starting today and promise to launch in 12 months.
5. Legacy company lights billion-dollar bonfire to distract investors while CTO frantically tries to source a robust embedded operating system with consumer-grade interfaces and feature set.
6. Best case, no one who has such an OS will license it. Worst case, Google will.
7. Leadership jumps ship, legacy company craters or slowly slides into irrelevance, and CEO later gives interviews about how absolutely no one could have seen this coming, with a sidebar complaining about software engineering salaries.
Honestly, this whole narrative is becoming a bit boring at this point. VW is at stage 5. The fact that its leadership consists entirely of charlatans is self-evident.
Latin is still perfectly capable of every kind of ambiguity that other natural languages are. For example, I once wrote the sentence
Quisque aliquid habet quod occultet
for a t-shirt.
While the intended meaning is 'everybody has something to hide', in a different context one could imagine that the subject of "occultet" is someone else previously referred to. For example, if we had just been talking about Moxie Marlinspike, we could conceivably read this sentence as 'everybody has something for him [Moxie] to hide'. (Like, all of us users out here have got different things that Moxie can help each of us to protect.)
There's also a famous joke "malo malo malo malo malo" ('I prefer (being) a bad man in a bad situation to (being) an apple in an apple tree'). I'm sure we can proliferate examples of ambiguous Latin to match every other natural language.
A cool disambiguation feature in Latin is the distinction between the possessive pronouns "eius" and "suus", where "suus" is used when referring to possessions of the grammatical subject of the sentence and "eius" when referring to someone else's possessions. While English can specify the former ("his/her/its own"), it doesn't have a straightforward way to show that the possessor is not the subject of the sentence.
You can see the contrast between eius and suus in the text of the Magnificat
where "ancillae suae" ('his handmaiden') occurs in a sentence whose grammatical subject is God, but "nomen eius" ('his name') in a sentence whose grammatical subject is the name. And sure enough, there is an actual disambiguation between the subject of a sentence and someone else later on:
Suscepit Israel, puerum suum, recordatus misericordiae suae, sicut locutus est ad patres nostros, Abraham et semini eius in saecula.
He [God] has taken up Israel, his [God's] servant, remembering his [God's] mercy, as he [God] said to our ancestors, Abraham and his [Abraham's] seed forever.
In this case "his mercy" and "his seed" refer to God's mercy but Abraham's seed, but there is no referential ambiguity about that in the Latin because one is "misericordiae suae" and the other is "semini eius".
This is a phenomenal oral history of 9/11: the events themselves, how people learned of them, and the response to them.
If an emergency is a circumstance in which ongoing realities are emergent and aren't predictable based on previous experience or world-models, this is a phenomenal example of those dynamics.
Commander Anthony Barnes: That first hour was mass confusion because there was so much erroneous information. It was hard to tell what was fact and what wasn’t. We couldn’t confirm much of this stuff, so we had to take it on face value until proven otherwise.
I've long maintained that the first signs of a disaster tend to be:
- Information doesn't add up.
- Communications are completely severed.
- Old models of understanding don't apply.
- Old filters or sources of information don't apply, and information overload is experienced because it's not clear what to ignore or what to trust.
Our world models give us the means to process and parse information, but also, critically, let us discard extraneous information at little or no cost. When we're placed in unfamiliar or extraordinary circumstances, "foreign territory" as Col. Bob Maar put it, old models do not hold.
To achieve your goal, you would need to change the very fundamental fabric of humanity and the reality of resource scarcity. Here are some things that can lead to war: economic gain, territorial gain, religion, nationalism, revenge, civil strife, and defense. Below are a few expressions of what that can look like:
* Actor A burned your crops -> Do nothing (appeasement)
* Actor B disrupted your servers causing economic losses -> Do nothing (appeasement)
* Actor C stole your technology and sold it to everyone at a fraction of your price -> Do nothing (appeasement)
* Actor D manipulated a group within your nation to cause internal conflict -> Do nothing (appeasement)
* Actor E invaded your land and claimed it as their own -> Do nothing (appeasement)
* Actor F used their position of power to gain concessions from you -> Do nothing (appeasement)
* Actor G defamed you, leading to society admonishing you and potentially imprisonment -> Do nothing (appeasement)
Each actor can be seen as either an individual or a nation in existence today. Every one has its own set of interests and its own rationale for acting the way it does.
I'd love to see one of these for Australian English, although I don't think it would be possible. Aussies have a love of metaphor and simile that's unmatched in any other dialect of English, especially if you hang out with bogans (basically Australian rednecks, for lack of a better description). It's basically the opposite of America: instead of trying to be excessively polite, we try to be excessively offensive. The other habits of Australians are to shorten words and add an "o" to the end, and to swear excessively.
Here's a couple of my favourites:
"Not here to fuck spiders" => Not here to waste time
"Mad as a cut snake" => Very angry
"Busier than a one armed brickie in Baghdad" => Very busy (a brickie is a bricklayer)
"Built like a brick shithouse" => A large, muscular man
"Crack the shits" => get annoyed
"Dog act" => something done to a friend that's uncalled for, e.g. skipping your round at the pub
"Stitch up" => A scam or a trick
"Sick cunt" => An outstanding person
"Old mate" => Someone who's not your friend
"Smoko" => Morning break at work
"Misso" => Girlfriend (short for missus)
"Ambo" => Paramedic (short for ambulance)
"Servo" => Service Station (gas station for Americans)
"Seppo" => American
"Pokies" => Poker machines
"Yeah nah" => either yes, or no, depending on context. Can also be used as filler in a sentence.
This leads to beautiful sentences, such as:
"Yeah nah, the misso cracked the shits at me 'cause I spent the whole arvo at the pokies"
This question sounds like it's pointed directly at me.
However, I can only speak for one AAA gaming company, and my team operates a bit differently than most in the company.
My team operates the infrastructure for "Tom Clancy's The Division" video game series (1&2).
Most of the programming effort goes into doing the cheapest possible thing (in terms of CPU), and everything is C++.
Things like matchmaking will ideally happen on a single machine with no shared state; everything happens in memory, which is much faster and can be more reliable than any distributed state or failover.
(It's less reliable if you're in matchmaking and the server or service dies, but then everyone's client reconnects to the newly scheduled matchmaking instance and repopulates the in-memory state.)
We use a lot of Windows; nearly every machine that doesn't handle state is a Windows server. This has pros and cons. From my ops perspective I try to treat Windows like cattle, but Windows doesn't like that: it has its own way of operating fleets of machines, which involves SCCM and packaging things. There are nice GUIs, but we use Saltstack, and we removed AD because it was a huge burden to create a machine, link it to AD, reboot it, and finally get a machine worth using.
From a dev perspective, Windows is good: I/O completion ports are superior to Linux's epoll in both interface and performance, so we can have machines that take 200,000+ connections.
Which dedicated server you connect to is up to your client. As it logs in, it does a naive check: a TLS handshake with one random gameserver per region. (During the login phase we send your client a list of all currently active gameservers, plus an int representing each region.)
This works fine until there's packet loss on a particular path, because your single ping might be fine while your overall experience would be better elsewhere; if you're not able to ping anything, we fall back on GeoIP.
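Schematically, that selection logic looks something like this (an illustrative Python sketch; the function names and the probe callback are made up, and the real thing is C++ in the client):

```python
import random

def pick_region(regions, probe, fallback_region):
    """Pick the region with the lowest handshake RTT to one random
    gameserver; fall back to a GeoIP-derived region if nothing answers.

    regions: dict mapping region id -> list of gameserver addresses
    probe:   function(addr) -> RTT in ms, or None if unreachable
    """
    rtts = {}
    for region_id, servers in regions.items():
        rtt = probe(random.choice(servers))  # one naive sample per region
        if rtt is not None:
            rtts[region_id] = rtt
    if not rtts:
        return fallback_region  # e.g. derived from a GeoIP lookup
    return min(rtts, key=rtts.get)
```

The single-sample probe is exactly why packet loss on one path can mislead it: one lucky ping wins the region even if sustained quality is worse.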
That said, if you have friends in another datacenter, we try to put you on the same server, so that joining a group is just a phase transition rather than a full server sync.
Everything is orchestrated with a "core" set of servers which handle player authentication and matchmaking centrally, then each of the gameserver datacenters (geographically distributed to be closer to players) is attached via ipsec VPN.
In Division 2 we spread out into GCP as well as physical machines, so we developed a custom autoscaler. It checks the capacity of a region and how many players are currently in it, keeps a record over 10 minutes, and makes a prediction. If the prediction goes over the current capacity within the next 20 minutes, it creates a new instance (making Windows servers on GCP takes longer than Linux servers).
If the prediction drops below the capacity of a server, it sends a decommission request to the machine, which takes up to 3 hours to complete (to give people time to leave the server naturally).
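A toy version of that scale-up/scale-down decision might look like this (illustrative Python, not our actual code; the linear extrapolation over the 10-minute window is an assumption about how the prediction works):

```python
def predict_players(samples, horizon_min=20):
    """Linearly extrapolate player count from (minute, players) samples
    covering roughly the last 10 minutes."""
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    slope = (p1 - p0) / (t1 - t0)
    return p1 + slope * horizon_min

def scale_decision(samples, capacity, per_server_capacity):
    predicted = predict_players(samples)
    if predicted > capacity:
        return "create_instance"  # 20 min lead time: Windows boots slowly on GCP
    if predicted < capacity - per_server_capacity:
        return "decommission"     # server then drains for up to 3 hours
    return "hold"
```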
Idk, I've been doing this for 5 years now, so I can talk at length about how we do it, but ultimately our biggest challenges are that we can't use cool shit or new trends, because latency matters a lot and we use Windows everywhere.
--
As an aside: the overwhelming majority of other Ubisoft games (exception: For Honor) use something very similar to what we released as open source in collaboration with Google to do matchmaking: https://agones.dev/site/
It is a bug, but in the CPU. Until recently, Intel CPUs' popcnt instruction had a false dependency on its output register, which can be a performance issue. Explicitly clearing the register first (with an xor, which is very fast and almost free) is an easy way to work around it.
> The colorful court case, held in Jacksonville, Florida, started September 10, 2001, the day before terrorists crashed planes into the World Trade Center, the Pentagon, and a field in Pennsylvania. The stunned news media quickly forgot about the McDonald’s trial, which explains why so few Americans remember the scandal, or how it ended.
In the months/years after 9/11 I remember that a recurring theme in longform stories was that their events took place shortly before or after 9/11 and had collectively been forgotten. One that I still remember is a Sports Illustrated feature about 8 Wyoming college cross-country runners who died in the worst vehicle crash in Wyoming history [0]. Though maybe in today's 24/7+ media cycle and attention deficits, plenty of interesting stories slip through the cracks on a more regular basis.
I think of it this way:
How can I build this so that it only solves today’s problems but doesn’t make it overly difficult to solve tomorrow’s problems?
Loose coupling, dependency injection, composition over inheritance, and similar techniques tend to be good answers to this question in my experience.
In contrast, over-engineering attempts to solve tomorrow's problems before they arrive, and if they arrive differently than predicted, it makes them harder to solve: when you have to change something, it's less clear which parts of the design were necessary for the original problem and which existed only for the future problem that was incorrectly predicted. Often you end up having to rethink the entire architecture rather than facing the relatively simple problem of adjusting things to meet new requirements.
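As a tiny illustration of the difference (a minimal Python sketch; the class names are invented):

```python
# Tightly coupled: the class constructs its own dependency, so swapping
# storage later means editing this class.
class HardcodedReport:
    def save(self, text):
        with open("/tmp/report.txt", "w") as f:
            f.write(text)

# Loosely coupled: the writer is injected. Today's problem is still solved,
# and tomorrow's (S3, a database, a test fake) needs no change here.
class Report:
    def __init__(self, writer):
        self._writer = writer

    def save(self, text):
        self._writer(text)

saved = []
report = Report(saved.append)  # inject an in-memory writer, e.g. for tests
report.save("quarterly numbers")
```

Note the injected version doesn't try to predict *which* storage comes next; it just avoids welding the decision in place.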
The most important operation in QNX is MsgSend, which works like an interprocess subroutine call. It sends a byte array to another process and waits for a byte array reply and a status code. All I/O and network requests do a MsgSend. The C/C++ libraries handle that and simulate POSIX semantics. The design of the OS is optimized to make MsgSend fast.
A MsgSend is to another service process, hopefully waiting on a MsgReceive. For the case where the service process is idle, waiting on a MsgReceive, there is a fast path where the sending thread is blocked, the receiving thread is unblocked, and control is immediately transferred without a trip through the scheduler. The receiving process inherits the sender's priority and CPU quantum. When the service process does a MsgReply, control is transferred back in a similar way.
This fast path offers some big advantages. There's no scheduling delay; the control transfer happens immediately, almost like a coroutine. There's no CPU switch, so the data that's being sent is in the cache the service process will need. This minimizes the penalty for data copying; the message being copied is usually in the highest level cache.
Inheriting the sender's priority avoids priority inversions, where a high-priority process calls a lower-priority one and stalls. QNX is a real-time system, and priorities are taken very seriously. MsgSend/Receive is priority based; higher priorities preempt lower ones. This gives QNX the unusual property that file system and network access are also priority based. I've run hard real time programs while doing compiles and web browsing on the same machine. The real-time code wasn't slowed by that. (Sadly, with the latest release, QNX is discontinuing support for self-hosted development. QNX is mostly being used for auto dashboards and mobile devices now, so everybody is cross-developing. The IDE is Eclipse, by the way.)
Inheriting the sender's CPU quantum (time left before another task at the same priority gets to run) means that calling a server neither puts you at the end of the line for CPU nor puts you at the head of the line. It's just like a subroutine call for scheduling purposes.
MsgReceive returns an ID for replying to the message; that's used in the MsgReply. So one server can serve many clients. You can have multiple threads in MsgReceive/process/MsgReply loops, so you can have multiple servers running in parallel for concurrency.
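If it helps, here's a toy Python model of that synchronous send/receive/reply rendezvous. It only models the blocking semantics; real QNX does this in the kernel, with priority inheritance and without a trip through the scheduler:

```python
import threading, queue

class Channel:
    """Toy model of QNX-style synchronous message passing."""
    def __init__(self):
        self._inbox = queue.Queue()

    def send(self, msg):
        """Like MsgSend: block until the server replies."""
        done = threading.Event()
        box = {"reply": None, "done": done}
        self._inbox.put((msg, box))
        done.wait()          # sender stays blocked for the whole call
        return box["reply"]

    def receive(self):
        """Like MsgReceive: block for a message; the box plays the rcvid role."""
        return self._inbox.get()

    def reply(self, box, result):
        """Like MsgReply: unblock the sender, handing back the result."""
        box["reply"] = result
        box["done"].set()

chan = Channel()

def server():
    msg, rcvid = chan.receive()
    chan.reply(rcvid, msg.upper())   # "process" the request

threading.Thread(target=server, daemon=True).start()
print(chan.send("ping"))  # -> PING
```

Because the rcvid travels with each message, one server loop (or several running in parallel) can serve many clients, just as described above.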
This isn't that hard to implement. It's not a secret; it's in the QNX documentation. But few OSs work that way. Most OSs (Unix-domain sockets, System V messaging) have unidirectional message passing, so when the caller sends, the receiver is unblocked, and the sender continues to run. The sender then typically reads from a channel for a reply, which blocks it. This approach means several trips through the CPU scheduler and behaves badly under heavy CPU load. Most of those systems don't support the many-one or many-many case.
Somebody really should write a microkernel like this in Rust. The actual QNX kernel occupies only about 60K bytes on an IA-32 machine, plus a process called "proc" which does various privileged functions but runs as a user process. So it's not a huge job.
All drivers are user processes. There is no such thing as a kernel driver in QNX. Boot images can contain user processes to be started at boot time, which is how initial drivers get loaded. Almost everything is an optional component, including the file system. Code is ROMable, and for small embedded devices, all the code may be in ROM. On the other hand, QNX can be configured as a web server or a desktop system, although this is rarely done.
There's no paging or swapping. This is real-time, and there may not even be a disk. (Paging can be supported within a process, and that's done for gcc, but not much else.) This makes for a nicely responsive desktop system.
Linux came after the BSDs, so you would think the BSDs would have won.
There are many reasons Linux-based systems are generally much more popular than the BSDs in the server and workstation spaces. Here's why I think that happened:
* GPL vs. BSD license. Repeatedly someone in the BSD community had the bright idea of creating a proprietary OS based on a BSD. All their work was then not shared with the OSS BSD community, and the hires removed expertise from the OSS BSD community. In contrast, the GPL forced the Linux kernel and GNU tool improvements to stay in the community, so every company that participated improved the Linux kernel and GNU tools instead of making their development stagnate. This enabled the Linux kernel in particular to rocket past the BSDs in terms of capabilities.
* Bazaar vs. Cathedral. The BSDs had a small group who tried to build things elegantly (cathedral), mostly in "one big tree". GNU + Linux were far more decentralized (bazaar), leading to faster development. That especially applies to the Linux kernel; many GNU tools are more cathedral-like in their development (though not to the extent of the BSDs), and they've paid a price in slower development because of it.
* Multi-boot Installation ease. For many years Linux was much easier to install than the BSDs on standard x86 hardware. Linux used the standard MBR partitioning scheme, while the BSDs required their own scheme that made it extremely difficult to run a BSD multi-boot setup. For many people computers (including storage) were very expensive - it was much easier to try out Linux (where you could dual-boot) than BSDs. The BSDs required an "all-in" commitment that immediately caused many people to ignore them. I think this factor is underappreciated.
* GNU and Linux emphasis on functionality and ease-of-use instead of tiny-ness. GNU tools revel in all sorts of options (case in point: cat has numerous options) and long-name options (which are much easier to read). The BSDs are often excited about how small their code is and how few flags their command lines have... but it turns out many users want functionality. If tiny-ness is truly your goal, then busybox was generally better once that became available circa 1996 (because it focused specifically on tiny-ness instead of trying to be a compromise between functionality and tiny-ness).
Some claim that the AT&T lawsuit hurt the BSDs, but there was lawsuit-rattling aimed at Linux and GNU as well, so while others will point to that, I don't think it was a serious factor.
The amount of untrusted DLL injection used to mod games that are unmoddable by default; third-party VR tools and drivers; video and audio multiplexer drivers; compatibility drivers for normally unsupported console cameras/controllers, etc.; ultra-demanding last-gen console emulators; macro tools that are essentially keyloggers; anti-cheat daemons running as admin to read the memory of other processes...
Windows gaming is wild! I literally sell my soul to gain a few more fps or a bit more immersion. Browser JS looks almost too innocent in this whole mess.
Thunderbird was always the lesser entity within the Mozilla infrastructure, and there have long been influential engineers within the core Gecko development cohort who did not think that Gecko engineers should be attempting to do anything to support Thunderbird. (Of course, there were other engineers who would happily write the patches themselves to fix Thunderbird if informed they broke it, so this is by no means a universal opinion within Mozilla).
In and around 2012, Mozilla was deeply terrified of its sliding market share and felt that the best way forward was to move into the mobile market. The initial XUL-based Firefox for Android proved to be a veritable resource hog, which made Mozilla push a lot harder and faster both in ending its support for XUL and in revamping its browser engine to work a lot better on mobile devices. One of the results of this effort was the attempt to make a mobile OS, running on the most pitiful smartphones, that would use Gecko as its main application environment. Thunderbird, being a large, featureful, monolithic application with a 15-year-old codebase [1], was (and still is) not really suitable for use on a mobile operating system, and it's probably easier to rewrite the code from scratch than to attempt to modify it for use on mobile devices.
The then-CEO of Mozilla in 2011 (his name escapes me at the moment) was a big advocate of the push to mobile, and continuing support for Thunderbird looked like the sort of distraction and cost that Mozilla couldn't afford in a time of crisis. So it's not surprising that they decided to stop paid development, although the announcement came as a shock to the people working on it: one person had been hired with the expectation that he would soon be leading a team of people, only to find that he would become the sole paid engineer on the codebase.
Originally, when Mozilla made the announcement, they promised there would be a skeleton crew doing minimal maintenance of the project: a developer, a release engineer, a QA person, and a build engineer, if I recall all the roles correctly. Over the next few years, the people still doing paid development were essentially told to stop working on Thunderbird, so even the promised skeleton crew didn't last long. In the meantime, though, several community members (myself included) were providing most of the legwork to actually maintain the project.
One of the other things that Mozilla promised was that money donated specifically for Thunderbird would remain in a pool that could only be used for Thunderbird. It took a few years for the community (many thanks especially to rkent's efforts in actually getting this to happen) to actually get access to these finances, but by around 2017, we were able to finally start getting contracts for people to actually work on Thunderbird on a paid basis. There's actually no contribution from Mozilla, except for the free hosting it provides as well as the time that core Gecko developers are willing to spend on contributing to Thunderbird.
[1] In terms of oldest Mozilla code still in active use, Thunderbird has some contenders. There are several pieces that are essentially unchanged from the public CVS 1.1 revision in 1999, and some code I found that is substantially similar to the Netscape Classic code once on MXR--I don't know if that was the 4.x version or the abandoned 5.x version. Some comments in libmime have a date in 1997.
I think they should give the option of completing a larger project on their own instead of whiteboarding questions. For example, Symantec gave a practice problem where you had to basically build a mini-virus checker with wildcards (* and ?) and they selected the submissions with the fastest times to interview. I really enjoy performance-based problems and trying to optimize threading, memory, and caching but whiteboard interviews don't really give me that opportunity to showcase those talents. I like solving big problems and whiteboard questions really test your ability to memorize trivial ones.
The actual interview was much less whiteboarding and more explaining your code to make sure you actually wrote it. It felt like a more classical interview where they aren't trying to see if you are full of BS because they already had an actual work sample that was representative of an actual type of problem likely to be experienced on the job.
Avoiding weird solutions by adding appropriate constraints is extremely important to people who solve optimization problems in practice. The classic example from the inventor of linear programming is the diet problem [1], where the naive LP suggested to eat nothing but bouillon cubes or drink 500 gallons of vinegar.
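To see the effect, here's a toy two-variable diet LP (invented numbers, not Stigler's actual data) solved by brute-force vertex enumeration. The naive version happily prescribes 100 gallons of vinegar until we add a cap:

```python
from itertools import combinations

def solve_lp_2d(c, constraints):
    """Minimize c.x over {x : a.x <= b for (a, b) in constraints}
    by enumerating vertices (intersections of constraint boundaries)."""
    def feasible(x):
        return all(a[0]*x[0] + a[1]*x[1] <= b + 1e-9 for a, b in constraints)
    best = None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x = ((b1*a2[1] - b2*a1[1]) / det, (a1[0]*b2 - a2[0]*b1) / det)
        if feasible(x):
            cost = c[0]*x[0] + c[1]*x[1]
            if best is None or cost < best[0]:
                best = (cost, x)
    return best[1]

# x = (gallons of vinegar, loaves of bread); each gives 1 nutrition unit.
cost = (0.10, 0.50)
base = [((-1, -1), -100),   # nutrition: v + b >= 100
        ((-1, 0), 0),       # v >= 0
        ((0, -1), 0)]       # b >= 0
naive = solve_lp_2d(cost, base)                    # -> (100, 0): all vinegar
capped = solve_lp_2d(cost, base + [((1, 0), 2)])   # vinegar <= 2 -> (2, 98)
```

The cheapest-per-nutrient food dominates the unconstrained optimum, exactly the kind of degenerate answer the added bound rules out.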