It's been shown in other fields that training models on the output of other models produces subtly broken models, not a flattening to the statistical mean. Why would science be different?
Ideally the test should also report the number of bit errors that were corrected by the on-disc ECC. That count could then be used to estimate disc lifetime (preferably from multiple samples).
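Something like the back-of-the-envelope math below is what I mean - everything here is hypothetical (the sample numbers, the exponential-growth assumption, the failure threshold), since real drives only expose corrected-error counts, if at all, through vendor-specific commands rather than any portable API:

    /* Hypothetical sketch: extrapolating disc lifetime from ECC-corrected
       error counts taken from several samples of different ages. All of
       the numbers and the growth model are made up for illustration. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* age of each sample disc in months vs. bit errors the on-disc
           ECC corrected during a full read pass */
        double age[]    = {  6.0, 12.0,  24.0,   36.0 };
        double errors[] = { 40.0, 95.0, 410.0, 1900.0 };
        int n = 4;

        /* least-squares fit of ln(errors) = a + b*age, i.e. assume the
           corrected-error count grows exponentially as the media degrades */
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            double y = log(errors[i]);
            sx  += age[i];          sy  += y;
            sxx += age[i] * age[i]; sxy += age[i] * y;
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;

        /* hypothetical threshold: the error count at which the ECC can no
           longer keep up and reads start failing outright */
        double threshold = 1e6;
        printf("estimated age at ECC limit: %.1f months\n",
               (log(threshold) - a) / b);
        return 0;
    }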
16-bit programming kinda sucked. I caught the tail end of it, but my first project used Win32s, so I just cherry-picked what I wanted to work on to avoid having to learn it at all. I was fortunate that a Hype Train with a particularly long track was about to leave the station, and it was 32-bit. But everyone I worked with or around would wax poetic about what a pain in the ass 16-bit was.
Meanwhile, the PC memory model really did sort of want memory divided into at least a couple of classes, and we had to jump through a lot of hoops to deal with that in that era. Even when I wasn't coding in 16-bit, I was still consuming 16-bit games via boot disks.
I was recently noodling around with a retrocoding setup. I have to admit I grinned a silly grin when I found a set of compile flags for a DOS compiler that caused sizeof(void far*) to return 6 - the first time in my life I'd ever seen it return a non-power-of-two.
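For anyone who hasn't played with this, here's roughly what it looks like - a sketch that only builds with a DOS-era compiler such as Open Watcom (__near and __far aren't standard C), and the sizes printed depend entirely on which memory-model flags you pass:

    /* Segmented-pointer sizes under a DOS-era compiler. Not standard C;
       what gets printed depends on the memory model chosen at compile time. */
    #include <stdio.h>

    int main(void) {
        printf("near ptr: %u bytes\n", (unsigned)sizeof(void __near *));
        printf("far ptr:  %u bytes\n", (unsigned)sizeof(void __far *));
        printf("plain:    %u bytes\n", (unsigned)sizeof(void *));
        /* 16-bit build: near = 2 (offset only), far = 4 (16-bit segment
           plus 16-bit offset). 32-bit build: a far pointer is a 16-bit
           selector plus a 32-bit offset - six bytes, the odd sizeof. */
        return 0;
    }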
Where sublimation is the redirection of socially unacceptable impulses or desires into socially acceptable actions, desublimation refers to accepting those impulses and desires directly, which drains the energy that would otherwise be available for higher goals.
I ran this at one time, but it was a bit unstable. I remember corresponding with one of the authors, who remarked that it was also attempting to emulate the stability of Windows 95. This was ... oh gawd ... back in 1997 or 1998, I think.