You can combine the Sieve and Wheel techniques to reduce the memory requirements dramatically. There's no need to use a bit for numbers that you already know can't be prime. You can find a Python implementation at https://stackoverflow.com/a/62919243/5987
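To make that concrete, here's a minimal sketch (not the linked implementation; the names are my own) of a sieve that only stores flags for the 8 residues mod 30 coprime to 2, 3, and 5. A real version would pack the flags into bits rather than bytes:

    def wheel_candidates():
        """Yield 7, 11, 13, 17, ...: every number > 5 coprime to 30, in order."""
        gaps = (4, 2, 4, 2, 4, 6, 2, 6)  # distances between consecutive residues
        n, i = 7, 0
        while True:
            yield n
            n += gaps[i]
            i = (i + 1) % 8

    def wheel_sieve(limit):
        """Primes <= limit, storing one flag per wheel position (8 per 30 numbers)."""
        residues = (1, 7, 11, 13, 17, 19, 23, 29)  # the units mod 30
        index_of = {r: i for i, r in enumerate(residues)}
        flags = bytearray([1]) * ((limit // 30 + 1) * 8)

        def pos(n):  # flat index of a wheel candidate n
            return (n // 30) * 8 + index_of[n % 30]

        for p in wheel_candidates():
            if p * p > limit:
                break
            if flags[pos(p)]:
                # any composite coprime to 30 is p * q with both factors on the wheel
                for q in wheel_candidates():
                    if p * q > limit:
                        break
                    flags[pos(p * q)] = 0

        primes = [2, 3, 5]  # the wheel primes themselves, handled implicitly
        for n in wheel_candidates():
            if n > limit:
                break
            if flags[pos(n)]:
                primes.append(n)
        return primes

    print(wheel_sieve(100))  # [2, 3, 5, 7, 11, ..., 97]

That's 8 flags per 30 numbers, versus 15 for an odds-only sieve, and a bigger wheel (2x3x5x7 = 210, 48 residues) shrinks it further.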
I'd be interested in seeing an explanation of the code, since it looks pretty incomprehensible to me. Per the arbitrary rules I set for myself, I'm not allowed to precompute/hardcode the wheel (looks like this implementation uses a hardcoded wheel of size 2x3x5=30). I wonder if/by how much the performance would suffer by computing and storing the coprime remainders in memory instead of handing them directly to the compiler.
I wrote this in a semi-obfuscated style to make it fit on one screen. It's indeed a hardcoded 2x3x5 wheel, but I suspect computing all those constants would have made the program significantly longer.
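For the curious, here's one way the coprime remainders could be computed at runtime (a sketch; `wheel` is a made-up name). The table is tiny — 8 entries at size 30, 48 at size 210 — so the lookup cost should be negligible next to the sieving itself:

    from math import gcd, prod

    def wheel(primes=(2, 3, 5)):
        """Residues coprime to the wheel size, and the gaps between them."""
        size = prod(primes)  # 2 * 3 * 5 = 30
        residues = [r for r in range(1, size) if gcd(r, size) == 1]
        gaps = [(b - a) % size
                for a, b in zip(residues, residues[1:] + [size + residues[0]])]
        return size, residues, gaps

    print(wheel())  # (30, [1, 7, 11, 13, 17, 19, 23, 29], [6, 4, 2, 4, 2, 4, 6, 2])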
They've always said you spend a lot more time reading code than writing it. If suddenly you're writing a lot more code, you're going to spend a ton more time reading it.
There's nothing new about this pattern. When the tractor was invented, the farmer didn't get to knock off early. He just started producing 10x more. Then the tractors got bigger and more powerful, the things you used them with got more sophisticated too, and suddenly you're producing 100x more.
And the only people who could afford to tractor at that scale were Cargill/Monsanto, who bought out most of the small and medium-sized farms while leaving the ones that didn't take the offer to slowly die...
And yet there isn't widespread unemployment. Fewer farmers were needed so fewer people became farmers. Food became cheap and plentiful. Everyone else went on to do other things that they couldn't afford to do before. Software will do the same; we will make more software with fewer people and it will become ubiquitous to the point that people will just quickly generate whatever software they need rather than do many monotonous tasks manually.
There's something here that tends to go unrecognized, and it's a function of the way our monitors work. Any color made of multiple primaries, such as magenta, cyan, or yellow, will naturally be brighter than a single primary, because more photons are emitted from the display. Not twice as bright, since our eye's response is non-linear, but noticeably brighter.
Yup. This is precisely why the first image seems to have oscillating brightness, with clear, sharp peaks at yellow and cyan: it's not just changing color, it's literally twice as much light. It goes:
Red - 1x
Yellow - 2x
Green - 1x
Cyan - 2x
Blue - 1x
Magenta - 2x
(Of course magenta is not part of the spectrum.)
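You can check the 1x/2x pattern by summing the linearized sRGB channels, which is proportional to emitted light (helper names here are made up):

    def srgb_to_linear(c):
        """Invert the sRGB transfer function; c in [0, 1]."""
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def light_output(r, g, b):
        """Total linear light emitted, in units of one fully driven primary."""
        return sum(srgb_to_linear(c / 255) for c in (r, g, b))

    for name, rgb in [("red", (255, 0, 0)), ("yellow", (255, 255, 0)),
                      ("green", (0, 255, 0)), ("cyan", (0, 255, 255)),
                      ("blue", (0, 0, 255)), ("magenta", (255, 0, 255))]:
        print(f"{name:8s} {light_output(*rgb):.0f}x")

Note this measures emitted light, not perceived brightness; the eye's response to that light is non-linear, and the primaries also contribute unequally to perceived luminance (green most, blue least).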
A very first step towards a better spectrum is just to maintain constant output brightness (accounting for gamma). There will still be perceptual differences in brightness, as we naturally perceive green as brighter than blue.
Obviously this gets taken into account by the time the author gets to the CIE color model, but there are a number of "intermediate" improvements like that which you can make along the way.
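A sketch of that first step — rescale each color in linear light so the total output matches one fully driven primary, then go back through the transfer function (function names are hypothetical):

    def srgb_to_linear(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(c):
        return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

    def equal_output(rgb, target=1.0):
        """Rescale in linear light so total emitted light equals `target`.

        Assumes no channel exceeds 1.0 after scaling, which holds for
        pure hues with target <= 1.
        """
        lin = [srgb_to_linear(c / 255) for c in rgb]
        scale = target / sum(lin)
        return tuple(round(255 * linear_to_srgb(c * scale)) for c in lin)

    print(equal_output((255, 255, 0)))  # (188, 188, 0)

Note the gamma at work: to emit half the light, each channel of yellow drops to 188, not 128.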
Back in the 1970s I tried to come up with a metric time system by breaking a day into powers of 10. A centiDay was 14.4 minutes.
I realized it would never catch on, because a 30 minute TV show would have to fit into 28.8 minutes, and the only way to do that was to lose a couple of commercials. Never gonna happen.
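For anyone checking the arithmetic:

    DAY_MINUTES = 24 * 60        # 1440 minutes in a day
    centiday = DAY_MINUTES / 100
    print(centiday)              # 14.4 -- one centiDay in minutes
    print(2 * centiday)          # 28.8 -- the doomed "30 minute" slot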