Taxi doors already automatically open and close in major Japanese metro areas.
There is a certain romance to good service, but good service is not why people use taxis here.
One could make a similar argument that self-service restaurants serving revolving sushi, or tablet-ordered sushi, miss the good service of a great restaurant. Yet these places are wildly popular, because one goes there to eat.
The watch API has a horrible user experience on all platforms. One must send a GET and keep the pipe open, waiting for a stream of responses. If the connection is lost, changes might be lost. If one misses a resource version change, then either the reconnection will fail or a stale resource will be monitored.
The Java client does this with blocking, resulting in a large number of threads.
I truly like Kubernetes, and I think most detractors who complain about its complexity simply don't want to learn it. But the K8s API, especially the Watch API, needs some rigorous standards.
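The resume bookkeeping the comment describes can be sketched in Python. This is not the real client; the helper names are hypothetical, and only the event shapes (ADDED/MODIFIED/ERROR, with a 410 when the requested resourceVersion has expired) follow the actual watch protocol:

```python
# Sketch of the bookkeeping a watch client must do (helper names are
# hypothetical; only the event shapes follow the real watch protocol).

class StaleResourceVersion(Exception):
    """The server no longer remembers our resourceVersion (HTTP 410)."""

def consume(events, handle, last_rv=None):
    """Feed watch events to `handle`, tracking the resume point."""
    for ev in events:
        obj = ev["object"]
        if ev["type"] == "ERROR" and obj.get("code") == 410:
            # A plain reconnect with last_rv would fail: the caller
            # must re-list and open a fresh watch instead.
            raise StaleResourceVersion(last_rv)
        last_rv = obj["metadata"]["resourceVersion"]
        handle(ev)
    # Resume point for the next GET ...?watch=1&resourceVersion=<last_rv>
    return last_rv

# Two updates arrive, then the connection drops cleanly.
seen = []
rv = consume(
    [
        {"type": "ADDED", "object": {"metadata": {"resourceVersion": "101"}}},
        {"type": "MODIFIED", "object": {"metadata": {"resourceVersion": "105"}}},
    ],
    handle=seen.append,
)
```

A client that hits the 410 path cannot simply reconnect with its saved version; it has to re-list and start a new watch from a fresh resourceVersion, which is exactly the awkwardness being complained about.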
Interesting point. I feel the same about the old SNES classic, Earthbound.
It was a different perspective on America, making small towns and suburbia (a sometimes looked down upon aspect of the country) look appreciated, cozy, nice.
Japan doesn't have suburbs in the same way the US does; small towns often look and feel the same as the outskirts of major cities. Very small towns, though, as depicted in Hamaguchi's most recent film, Evil Does Not Exist, are qualitatively different from both.
Yes, because they are focused on the relevant parts for solving a specific problem.
So one could say that the article's fundamental flaw is not explaining what the purpose of the model is; without that, we cannot assess whether it captures the relevant aspects of the problem.
I understand that it could have been easier from the start by using a set. But at the time, it wasn't a requirement. Why would you use a set for a single entry?
Furthermore, you didn't need to keep using booleans. You could run a script that reads the boolean and updates the new field in each row, then transition your code to use that.
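A minimal sketch of that backfill, assuming a hypothetical doors(id, is_open, is_locked, state) table in SQLite (the schema and state names are illustrative, not from the article):

```python
# Hypothetical schema doors(id, is_open, is_locked, state): read the
# legacy booleans, write the new column, then migrate code to `state`.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE doors (id INTEGER PRIMARY KEY, is_open INT, is_locked INT, state TEXT)"
)
conn.executemany(
    "INSERT INTO doors (is_open, is_locked) VALUES (?, ?)",
    [(1, 0), (0, 1), (0, 0)],
)

# Backfill: derive the new field from the old booleans, row by row.
rows = conn.execute("SELECT id, is_open, is_locked FROM doors").fetchall()
for door_id, is_open, is_locked in rows:
    state = "locked" if is_locked else ("open" if is_open else "closed")
    conn.execute("UPDATE doors SET state = ? WHERE id = ?", (state, door_id))
conn.commit()

states = [row[0] for row in conn.execute("SELECT state FROM doors ORDER BY id")]
```

The old boolean columns stay in place until every reader has moved to the new field, so the migration can be done incrementally.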
But if the model doesn't reflect reality, then reality breaks the software. That's how bugs happen.
Which means the non-buggy model that best reflects reality, and covers all scenarios that will actually happen in production, is the two-boolean one, not the enum.
One could invent just as many wrong-mapping scenarios with enums. The trap is not the boolean. The fact is that domain modeling is hard, and it becomes even harder if you insist on avoiding booleans.
Booleans get you an exponentially growing number of states: n booleans encode 2^n combinations. The failure mode of booleans is that some of those states are nonsensical. The failure mode of enums [0] is having to enumerate all of the valid states when that number is itself growing exponentially.
The real problem here is the same tension as typed vs. untyped programs, applied to data constraints more specific than types. In an untyped language/database/etc., the expectation for isOpen could be widened to admit an additional valid value, String("locked"), as a third state - ugly but semantically correct. Of course, you now have possible logic errors at every test of isOpen, depending on whether each conditional was written with more than two boolean states in mind (and possibly on how the language shortcuts "naked boolean" conditions).
In an environment with the ability to express constraints on the data, the state (isOpen=true, isLocked=true) could be prohibited - a straightforward solution that requires some deliberate data modeling work.
[0] correctly used, as in the author's door example, which pertains to a door that automatically unlocks when going open->closed (which is actually relatively rare! so if this had been an instance of modeling the real world, I question whether it would actually have been correct!). Meanwhile, the author's PremiumFeature example is really just using an enum to create booleans in a different form, and doesn't actually support their thesis.
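The constraint approach above can be sketched in Python: the two booleans admit four states, one of which is nonsensical, while an enum (names illustrative, not the article's) makes the bad state unrepresentable:

```python
from enum import Enum

# Illustrative names: two booleans admit four states, one of which
# (open AND locked) is nonsensical; the enum admits only the valid three.

class DoorState(Enum):
    OPEN = "open"
    CLOSED = "closed"
    LOCKED = "locked"  # locked implies closed

def from_booleans(is_open, is_locked):
    """Map the legacy boolean pair, rejecting the impossible combination."""
    if is_open and is_locked:
        raise ValueError("a door cannot be open and locked")
    if is_locked:
        return DoorState.LOCKED
    return DoorState.OPEN if is_open else DoorState.CLOSED
```

The rejection in from_booleans is exactly the deliberate modeling work mentioned above: the impossible pair has to be handled somewhere, either at the boundary (as here) or at every site that reads the booleans.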
Yeah this feels like a “one way doors” problem. Designing your data models in a way that would cause a later refactor to be painful or impossible is probably the bigger trap.