I think there's something important and correct in this take.
As another example to your point, whether a function returns a list of foos or a single foo is also a color. You have to do completely different things when calling a function that returns a list compared to one that returns a single value. And that's not just empty pedantry. Haskell lets you treat "list" as an effect (since it is a monad). Array languages like APL, J, or R are colorblind with respect to "a list"/"not a list", as the difference is erased (to a large extent).
So there really is a whole rainbow of function colors (basically types).
On the other hand, this point of view doesn't really invalidate the original coloring argument at all. The effect encapsulated by "async" really is one that can be annoying and one that can actually be made implicit with a result that many would prefer.
This is exactly why I like all of this being handled in the return type. Return a Promise<T> or a Task<T> or a Stream<T> or an IEnumerable<T> as appropriate. Every function always returns immediately, but it might not do the important work immediately -- instead, it plans out the important work, which you can plug together as you see fit with other work and execute if you want, or simply return a composite plan for someone else to execute. All of its various "colors" are explicit in the return type.
Combinatorial explosions are wrapped up in the Ts themselves. Sure, you might sometimes get a Stream<Array<Stream<T>>>, but if that's simply the appropriate type to describe what your function does, then who's complaining? As long as you have various ways to manipulate those types, like "Combine" (Array<Stream<T>> -> Stream<Array<T>>) and "Flatten" (Stream<Stream<T>> -> Stream<T>) then you can handle everything. And you'll find that getting your types correct increasingly implies that your entire code is correct, so the compiler is helping you out a lot.
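With Promise standing in for the hypothetical Stream, JavaScript/TypeScript already ships both of those operations; a minimal sketch:

```typescript
// Combine: Array<Promise<T>> -> Promise<Array<T>>
// Promise.all is exactly this shape.
async function demo(): Promise<number[]> {
  const jobs = [Promise.resolve(1), Promise.resolve(2)];
  const combined = Promise.all(jobs); // Promise<number[]>

  // Flatten: Promise<Promise<T>> -> Promise<T>
  // `await` (and .then) flatten one level of nesting automatically.
  const nested = Promise.resolve(Promise.resolve(3));
  const flat = await nested; // a number, not a Promise

  return [...(await combined), flat];
}
```

Once those combinators exist, the "explosion" is just plumbing: you reshape the nested type until it matches what the caller needs.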
If you want job1 and job2 to be executed in parallel, you could do this:
set result to [job1(), job2()]
set result1 to the first result
set result2 to the last result
The array expression will wait until all promise values resolve.
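For readers more at home in JavaScript, the hyperscript above behaves roughly like this (job1/job2 are hypothetical async stand-ins):

```typescript
// Hypothetical stand-ins for the jobs in the example above.
const job1 = async () => "one";
const job2 = async () => "two";

// Both jobs start immediately; awaiting the array of promises together
// mirrors hyperscript's implicit wait on the array expression.
async function runBoth(): Promise<[string, string]> {
  const result = await Promise.all([job1(), job2()]);
  return [result[0], result[result.length - 1]];
}
```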
This isn't really what hyperscript is designed for, however: its async behavior is designed instead to take the async/sync distinction (and callback hell) out of "normal" DOM scripting. If you have sophisticated async needs, then JavaScript is probably a better option. The good news is that hyperscript can call JavaScript functionality in the normal way, so that is an easy option if hyperscript's behavior isn't sufficient for your needs.
Thanks for the reply. I can understand that the purpose here isn't to be as powerful as JavaScript, but rather to be simpler and easier. That being said, is there a way to achieve the below or is that beyond the limits?
You can't achieve that with the intended semantics directly in hyperscript, because it will resolve all argument promises before invoking Promise.race().
You could move that out to JavaScript and then call that function from hyperscript:
set result to jsPromiseRace()
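where the JavaScript helper might look something like this (job1/job2 as hypothetical stand-ins for your real jobs):

```typescript
// Hypothetical jobs: the first settles quickly, the second slowly.
const job1 = () => new Promise<string>(res => setTimeout(() => res("fast"), 10));
const job2 = () => new Promise<string>(res => setTimeout(() => res("slow"), 50));

// Exposed so hyperscript can call it as jsPromiseRace().
// Promise.race takes an iterable and settles with the first promise to settle.
function jsPromiseRace(): Promise<string> {
  return Promise.race([job1(), job2()]);
}
```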
it could also be done as inline js:
js
return Promise.race([job1(), job2()]);
end
log the result
but I think if you are writing this style of code it makes more sense to just kick out to JavaScript for it.
> For example if a function needs to take a record containing x as a field with type int, it should also accept a record with y as a field that it does not use.
Not sure if I misunderstand what you mean, but this does not require subtyping. One of the key distinguishing features of row polymorphism is that exactly this can be achieved without subtyping. The extra unused fields (`y` in your example) are represented as a polymorphic type variable _instead_ of using subtyping. See for instance page 7 in these slides: https://www.cs.cmu.edu/~aldrich/courses/819/slides/rows.pdf
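TypeScript gives a rough feel for the idea, with the caveat that TS uses structural subtyping rather than true row variables, so this is only an approximation: a generic constraint lets the extra fields ride along in the type variable, much like the polymorphic "rest of the row":

```typescript
// R plays the role of the row variable: any extra fields (like y)
// are carried along in R and preserved in the result type.
function bumpX<R extends { x: number }>(rec: R): R {
  return Object.assign({}, rec, { x: rec.x + 1 });
}

const out = bumpX({ x: 1, y: "kept" });
// out.y is still present and typed as string: the unused field was
// threaded through rather than forgotten via subtyping.
```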
I agree with the blog post that learning how to learn is an important skill. But the post offers very little beyond a few tips on how to actually achieve that. For people interested in actually learning how to learn I'd recommend the book "Make It Stick: The Science of Successful Learning", which offers a lot of details on this topic based on actual scientific research.
that book seems to be very much in line with cutting-edge research on learning, thanks
book summary
"Make It Stick: The Science of Successful Learning" Summary:
Key Premise: Effective learning strategies differ from common study methods like rereading and cramming, which provide an illusion of mastery but lead to poor retention.
Main Learning Principles:
Retrieval Practice: Actively recalling information strengthens memory and makes learning more durable. Self-quizzing is more effective than passive review.
Spaced Repetition: Spacing out learning sessions over time leads to better retention than cramming.
Interleaved Practice: Mixing different types of problems or subjects during study sessions improves learning compared to studying one topic in blocks.
Elaboration: Explaining ideas in your own words and connecting them to existing knowledge improves understanding.
Generation: Attempting to solve a problem before being shown the solution enhances learning.
Reflection: Reviewing what you’ve learned and considering how it applies to your life strengthens learning.
Varied Learning: Learning in different contexts and environments makes the knowledge more adaptable and versatile.
Key Takeaways:
Rethink study habits: Active learning techniques outperform passive ones.
Learning is more effective when it's effortful—embrace challenges.
Long-term retention relies on consistent, spaced, and active engagement with material.
That summary is pretty good based on what I remember from the book. I think the second-to-last point, _effort_, deserves a bit more emphasis though. It's actually a common theme through all the effective learning methods that they require more effort, and more effort generally implies more effective learning. As an example, simply rereading a text takes little effort compared to doing flash cards, and the latter is more effective.
That's what Steam is. It works on Linux, Windows, and macOS, and it sells apps for all three platforms. Funding isn't a problem, as Valve takes a cut of sales. The whole thing is only possible, though, because the supported platforms are not locked down.
> Cosmopolitan Libc makes C a build-anywhere run-anywhere language, like Java, except it doesn't need an interpreter or virtual machine.
WebAssembly would not achieve the same thing as it's in the same category as Java bytecode where you need some interpreter/VM/JIT/compiler to actually run it.
> As for BigInt, I default to it by now and I've not found my performance noticeably worse. But it irks me that I can get a number that's out of range of an actual int32 or int64, especially when doing databases. Will I get to that point? Probably not, but it's a potential error waiting to be overlooked that could be so easily solved if JS had int32/int64 data types.
If your numbers can get out of the range of 32 or 64 bits then representing them as int32 or int64 will not solve your problems, it will just give you other problems instead ;)
If you want integers in JS/TS I think using bigint is a great option. The performance cost is completely negligible, the literal syntax is concise, and plenty of other languages (Python, etc.) have gotten away with using arbitrary precision bignums for their integers without any trouble. One could even do `type Int = bigint` to make it clear in code that the "big" part is not why the type is used.
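A small sketch of that aliasing idea (the `Int` alias and the sample values are illustrative, not from any library):

```typescript
// "big" is an implementation detail; treat bigint as the integer type.
type Int = bigint;

// Beyond Number.MAX_SAFE_INTEGER, yet still exact arithmetic.
const a: Int = 9_007_199_254_740_993n;
const b: Int = a * 2n;
```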
I think you answered your own question. It's the standard average-time analysis of Quicksort and the (unmentioned) assumption is that the numbers are from some uniform distribution.
Why would the distribution have to be symmetric? My intuition is that if you sample n numbers from some distribution (even a skewed one) and pick a random number among the n, then on average that number would separate the numbers into two equal-sized sets. Are you saying that is wrong?
With real numbers, I have the same intuition. But with integers, where 2 or more elements can be exactly the same, and with the two sets defined as they are in TFA, that is, one "less than" and one "greater or equal", I'd argue that the second set will tend to be bigger than the first.
In the pathological case where all the elements are the same value, the "less than" set will always be empty and the algorithm will not even terminate.
In a less extreme case where nearly all the items are the same except for a few, the algorithm will slowly advance, but not with the progression n, n/2, n/4, etc. that is needed to prove it's O(n).
Please note that the "less extreme case" I depicted above is quite common in significant real-world statistics. For example, how many times a site is visited by unique users per day: a long sequence of 1s with some sparse numbers > 1. Or how many children/cars/pets per family: many repeated small numbers with a few sparse outliers. Etc.
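A tiny sketch (with a hypothetical partitionSizes helper) of that "less than"/"greater or equal" split on duplicate-heavy data makes the imbalance concrete:

```typescript
// Partition as described: strictly-less vs greater-or-equal to the pivot.
function partitionSizes(xs: number[], pivot: number): [number, number] {
  const less = xs.filter(v => v < pivot).length;
  return [less, xs.length - less];
}

// All-equal input: the "less than" side is always empty, so the
// recursion makes no progress no matter which pivot is chosen.
const allOnes = [1, 1, 1, 1, 1, 1];
partitionSizes(allOnes, 1); // [0, 6]
```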
Speaking of bignum libraries, I recently watched a talk with Rob Pike where he mentioned that one thing he regretted about Go was not making the default integer implementation arbitrary precision. Supposedly the performance overhead for normal numbers is very small, and you avoid the weirdness and complicated semantics of fixed precision integers. I found that to be quite fascinating, especially coming from a "low-level guy" like Rob Pike. Ever since I've been wanting a language with that feature and to understand how bignum implementations work.
It's a pretty common feature in high-level languages. Python, Ruby, Lisp(s), Haskell, etc., all use arbitrary-precision integers by default. Even JavaScript has integers now (since 2020), not as the default number type, but with the `n` suffix, like `42n`.
That's cool. I didn't realize that those languages used arbitrary-precision integers by default. I know that many languages offer a bigint, but to me the difference between having bigints and them being the default seems significant. For instance, in JavaScript the `n` notation, and them being called `bigint` (and not just `int`), means that they will get used very rarely.
Yeah, I absolutely agree. The saddest cases for me are languages like Java, which comes with working arbitrary-precision integers in its standard library, but using them feels so second-class compared to the "primitive" types like `int`. You need to import the class; initializing a value looks like `var a = new BigInteger("39")`; they don't support normal math operators, so you need to do `a.add(new BigInteger("3"))` just for a simple addition; and so on.
It's difficult to argue that arbitrary-precision integers are "simpler" (because they don't have an overflow edge-case on almost every operation) when they are so much more inconvenient to use.
Common Lisp has been like this forever. It makes testing the language much easier, as you don't have to check for overflows when constructing test code.
A dynamic language like CL has the added requirement that the integers fit with everything else under a single top type, which in practice limits fixnums to a few bits less than the word size.
CL also lets you "switch off" multiple precision if you know you don't need it. You can start with bignums everywhere then declare certain functions to use fixnums after you profile your program to speed it up. Compilers like SBCL can then produce code pretty much as fast as C.
You can write a macro to duplicate a piece of code with different type declarations, and switch between them at entry depending on the types of various values. Code blowup, yes, but you can optimize the common cases without manually duplicating code.
GNU MP's C++ interface makes this mostly transparent, expressions like "x = y * z" just work due to operator overloading. The only time you need to know that the underlying class is "mpz_class" instead of "int" is at input/output.
For what it's worth, Haskell's Integer type is that way.
edit: To be clear, in Haskell you 'default' to using its arbitrary-precision type, unlike say Java, where they're available in the standard library but not among the core primitive types.
Smalltalk is another language that has had this feature since forever.
Tagged pointers / SmallInteger encodings make the memory overhead zero to negligible, and with the CPU / memory-bandwidth gap being what it is, the few extra CPU operations rarely matter.
Daniel Bernstein made the argument that integers should default to these semantics for security reasons:
eh, I disagree. It's not massive overhead, but it turns literally every integer operation into a branch. Sure, the branch is going to get ~100% prediction accuracy if you aren't using bigints, but that's still a pretty big cost, especially since the inserted branches make all sorts of compiler optimizations illegal.
When I write, I use Neovim, as it's what I'm used to, and the formats I write in are non-WYSIWYG formats like Markdown or LaTeX. My partner is a non-technical person and an author who writes fiction books. She uses Scrivener for that. From what I can see, it's a very high quality piece of software for that purpose.