I agree. And I think this also distills down to Rob Pike’s rule 5, or something quite like it. If your design prioritizes modeling the domain’s data and shapes algorithms around that model, it’s usually easy to tell whether some “duplication” is operating on shared concepts or merely following a similar pattern. It may even help you refine the data model itself when confronted with the question.
It’s a muscle you can exercise, and doing so helps you learn what to focus on so it’ll be successful. IME a very successful approach is to focus on interfaces, especially at critical boundaries (critical for your use case first, then critical for your existing design/architecture).
Doing this often settles the design direction in a stable way early on. More than that, it often reveals a lot of the harder questions you’ll need to answer: domain constraints and usage expectations.
Putting this kind of work upfront can save an enormous amount of time and energy by precluding implementation work on the wrong things, and ruling out problematic approaches for both the problem at hand as well as a project’s longer term goals.
> But let’s dissect that last suggestion; suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don’t really have a “type” anymore, you have a runtime check of the type everywhere. This isn’t radically different than regular dynamic typing.
Of course it’s different. You have a type that accurately reflects your domain/data model. Doing that helps to ensure you know to implement the necessary runtime checks, correctly. It can also help you avoid implementing a lot of superfluous runtime checks for conditions you don’t expect to handle (and to treat those conditions as invariant violations instead).
No, it really isn’t that different. If I had a dynamic type system, I would have to null check everything. If I declare everything as a Maybe, I would have to null check everything.
For things that are invariants, that’s also trivial to check against with `if(!isValid(obj)) throw Error`.
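For illustration, here is a minimal sketch of that kind of guard in TypeScript; the `User` shape and `isValid` predicate are hypothetical stand-ins for whatever your domain invariants actually are:

```typescript
// Hypothetical domain object, decoded at some boundary.
interface User {
  id: string;
  email: string;
}

// The invariant predicate: conditions we expect to always hold,
// as opposed to runtime cases we expect to handle.
function isValid(obj: User): boolean {
  return obj.id.length > 0 && obj.email.includes("@");
}

// The trivial guard: a violation is a bug, so we throw rather than handle it.
function assertValid(obj: User): void {
  if (!isValid(obj)) throw new Error(`invariant violated: ${JSON.stringify(obj)}`);
}
```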
Sure. The difference is that with a strong typing system, the compiler makes sure you write those checks. I know you know this, but that’s the confusion in this thread. For me too, I find static type systems give a lot more assurance in this way. Of course it breaks down if you assume the wrong type for the data coming in, but that’s unavoidable. At least you can contain the problem and ensure good error reports.
The point of a type system isn’t ever that you don’t have to check the things that make a value represent the type you intend to assign it. The point is to encode precisely the things that you need to be true for that assignment to succeed correctly. If everything is in fact modeled as an Option, then yes you have to check each thing for Some before accessing its value.
The type is a way to communicate (to the compiler, to other devs, to future you) that those are the expected invariants.
The check for invariants is trivial as you say. The value of types is in expressing what those invariants are in the first place.
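A minimal TypeScript sketch of that idea (the `Order` shape is hypothetical): the type records which fields may legitimately be absent, and strict mode then forces the one check that is actually needed.

```typescript
// Hypothetical domain type: discount may legitimately be absent.
// The type says so, and implicitly says everything else must be present.
interface Order {
  id: string;
  discount?: number;
}

function totalPrice(base: number, order: Order): number {
  // Under strict null checks, using order.discount as a number without
  // narrowing it first is a compile error, so this check can't be forgotten.
  const discount = order.discount ?? 0;
  return base - discount;
}
```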
I don’t think I did. I am one of the very few people who have had paying jobs doing Scala, Haskell, and F#. I have also had paying jobs doing Clojure and Erlang: dynamic languages commonly used for distributed apps.
I like HM type systems a lot. I’ve given talks on type systems, and in grad school I was working on extending type systems to deal with these particular problems. This isn’t meant as a statement on type systems in general. I am arguing that most type systems don’t encode a lot of the uncertainty you find when going over the network.
You're conflating types with the encoding/decoding problem. Maybe your paying jobs didn't provide you with enough room to distinguish between these two problems. Types can be encoded optimally with a minimally-required-bits representation (for instance: https://hackage.haskell.org/package/flat), or they can be encoded redundantly with all default/recovery/omission information. What you actually do with that encoding on the wire in a distributed system, with or without versioning, is up to you, and it doesn't depend on the specific type system of your language. But a strong type system offers you unmatched precision both at program boundaries where encoding happens and in business logic. Once you've got that `Maybe a` you can (<$>) in exactly one place at the program's boundary, and then proceed as if your data had always been provided without omission. And then you can combine (<$>) with `Alternative f` to deal with your distributed system's silly payloads in a versioned manner. What's your dynamic language's null-checking equivalent for that?
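Sketched in TypeScript terms (with hypothetical shapes), the “handle omission exactly once at the boundary” pattern looks like this:

```typescript
// Wire shape: fields may be missing. This is the "Maybe" layer.
interface WirePayload {
  name?: string;
  count?: number;
}

// Domain shape: no optionality left; business logic never re-checks.
interface Payload {
  name: string;
  count: number;
}

// The single boundary where omission is handled (a rough analogue of
// applying defaults with <$> at the edge). Names here are illustrative.
function decode(raw: WirePayload): Payload {
  return { name: raw.name ?? "unknown", count: raw.count ?? 0 };
}

// Past the boundary, code is written against Payload with no null checks.
function describe(p: Payload): string {
  return `${p.name}: ${p.count}`;
}
```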
With all due respect, you really do not understand these protocols if you think “just use TCP and ECC” addresses my complaints.
Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.
I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.
Fair enough, though I feel so entirely differently that your position baffles me.
Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.
The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.
Part of that was GHC extensions though they could easily be translated into boilerplate, and that only had to be done once per class.
Gleam will likely never live up to that level of programmer joy; what excites me is that it’s trying to bring some of it to BEAM.
It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.
I know everyone says that this is a huge issue, and I am sure you can point to an example, but I haven’t found that types prevented a lot of issues like this any better than something like Erlang’s assertion-based system.
It’s not a hack, but you may find more documentation for the equivalent preload values expressed as a <link> tag. There is (near) parity between that and the HTTP Link header. The values used in the article should work in HTML as well.
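For concreteness (the asset path is made up), the same preload expressed both ways:

```
HTTP response header:
  Link: </assets/app.css>; rel=preload; as=style

Equivalent <link> element in the HTML document:
  <link rel="preload" href="/assets/app.css" as="style">
```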
Disclaimer: I’m a strong advocate for static typing.
I absolutely see the connection. One of the advantages of static typing is that it makes a lot of refactoring trivial (or at least much easier than it would be otherwise). One of the side effects of making anything more trivial is that people will be more inclined to do it, without thinking as much about the consequences. It shouldn’t be a surprise that, absent other safeguards to discourage it, people will translate trivial refactoring into unexpected breaking changes.
Moreover, they may do this consciously, on the basis that “it was trivial for me to refactor, it should be trivial to adapt downstream.” I’ll even admit to making exactly that judgment call, in exactly those terms. Granted I’m much less cavalier about it when the breaking changes affect people I don’t interface with on a regular basis. But I’m much less cavalier about that sort of impact across the board than I’ve observed in many of my peers.
You might want that, I might too. But it’s outside the constraints set by the post/author. They want to establish immutable semantics with unmodified TypeScript, which doesn’t have any effect on the semantics of assignment or built in prototypes.
Well said. (I too want that.) I found my first reaction to `MutableArray` was "why not make it a persistent array‽"
Then took a moment to tame my disappointment and realized that the author only wants immutability checking by the typescript compiler (delineated mutation) not to change the design of their programs. A fine choice in itself.
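That compiler-only immutability can be sketched like this (a minimal example, not the author’s API):

```typescript
// ReadonlyArray exposes no mutating methods, so mutation is rejected
// at compile time; at runtime it's still an ordinary array.
function sum(xs: ReadonlyArray<number>): number {
  // xs.push(4); // compile error: 'push' does not exist on ReadonlyArray
  return xs.reduce((a, b) => a + b, 0);
}

const nums = [1, 2, 3];
const sumOfNums = sum(nums); // same array, viewed through a read-only type
```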
Marko’s compiler is designed for partial hydration (by default, without any special developer effort), which performs quite well. IIRC they were also looking at implementing “resumability” (term coined by Qwik, for an approach that sidesteps hydration as a concept entirely). I’m not sure where they’re at on that now, but I think it’s generally safe to say that Marko prioritizes load time performance more than nearly all other frameworks.
Lenses are mutation by another name. You are basically recreating state on top of an immutable system. Sure, it's all immutable underneath, but conceptually it doesn't really change anything. That's what makes it hilarious.
In the end, the world is stateful and even the purest abstractions have to hit the road at some point. But the authors of Haskell were fully aware of that. The monadic type system was conceived as a way to easily track side effects after all, not banish them.
It’s a clear-minded and deliberate approach to reconciling principle with pragmatic utility. We can debate whether it’s the best approach, but it isn’t like… logically inconsistent, surprising, or lacking in self awareness.
You might also think of it a bit like poetry: creativity emerging from the process of working within formal constraints. By asking how you can represent something familiar in a specially structured way, you can learn both about that structure and the thing you're trying to unite with it. Occasionally, you'll even create something beautiful or powerful, as well.
Maybe in that sense there's an "artificial" challenge involved, but it's artificial in the sense of being deliberate rather than merely arbitrary or absurd.
You don’t see what’s hilarious about recreating what you are pretending to remove only one abstraction level removed?
Anyway, I have great hopes for effect systems as a way to approach this in a principled way. I really like what OCaml is currently doing with concurrency. It’s clear to me that there is great value to unlock here.
I don’t agree with your characterization that anyone is “pretending”. The whole point of abstraction is convenience of reasoning. No one is fooling themselves or anyone else, nor trying to. It’s a conscious choice, for clear purposes. That’s precisely as hilarious as using another abstraction you might favor more, such as an effect system.
This is a matter of choice, not something with an objectively correct answer. Every possible answer has trade offs. I think consistency with the underlying standard defining NaN probably has better tradeoffs in general, and more specific answers can always be built on top of that.
That said, I don’t think undefined in JS has the colloquial meaning you’re using here. The tradeoffs would be potentially much more confusing and error prone for that reason alone.
It might be more “correct” (logically; standard aside) to throw, as others suggest. But that would have considerable ergonomic tradeoffs that might make code implementing simple math incredibly hard to understand in practice.
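To make the tradeoff concrete, a small sketch: with NaN propagation, intermediate arithmetic stays straight-line and the caller checks once at the end; a throwing design would force a guard (or try/catch) around each step instead.

```typescript
// IEEE 754 / JS behavior: 0/0 yields NaN instead of throwing, so the
// arithmetic reads as plain math and invalidity surfaces at the end.
function meanOfPositives(xs: number[]): number {
  const positives = xs.filter((x) => x > 0);
  return positives.reduce((a, b) => a + b, 0) / positives.length; // 0/0 -> NaN
}

const fine = meanOfPositives([1, 2, 3]);  // 2
const empty = meanOfPositives([-1, -2]);  // NaN; one Number.isNaN check suffices
```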
A language with better error handling ergonomics overall might fare better though.
>A language with better error handling ergonomics overall might fare better though.
So what always trips me up about JavaScript is that if you make a mistake, it silently propagates nonsense through the program. There's no way to configure it to even warn you about it. (There's "use strict", and there should be "use stricter!")
And this aspect of the language is somehow considered sacred, load-bearing infrastructure that may never be altered. (Even though, with "use strict", we already demonstrated that we have a mechanism for fixing things without breaking them!)
I think the existence of TS might unfortunately be an unhelpful influence on JS's soundness, because now there's even less pressure to fix it than there was before.
> And this aspect of the language is somehow considered sacred, load-bearing infrastructure that may never be altered. (Even though, with "use strict", we already demonstrated that we have a mechanism for fixing things without breaking them!)
There are many things we could do which wouldn't break the web but which we choose not to do because they would be costly to implement/maintain and would expand the attack surface of JS engines.
To some extent you’ve answered this yourself: TypeScript (and/or linting) is the way to be warned about this. Aside from the points in sibling comment (also correct), adding these kinds of runtime checks would have performance implications that I don’t think could be taken lightly. But it’s not really necessary: static analysis tools designed for this are already great, you just have to use them!