Yeah, similarly, Joe Armstrong (RIP), co-creator of Erlang, explained it to me like this:
> In distributed systems there is no real shared state (imagine one machine in the USA another in Sweden) where is the shared state? In the middle of the Atlantic? — shared state breaks laws of physics. State changes are propagated at the speed of light — we always know how things were at a remote site not how they are now. What we know is what they last told us. If you make a software abstraction that ignores this fact you’ll be in trouble.
He wrote this to me in 2014, and it has really informed how I think about these things.
The thing is that Go channels themselves are shared state (if the owner closes the channel and a client tries to write, you're not gonna have a good time)! Erlang message boxes are not.
Strictly speaking they’re shared state, but the way you model your application around channels is generally to have independent little chunks of work and the channels are just a means of communicating. I know it’s not one-for-one with Erlang.
You can think of closing the channel as sending a final message, “there will be no further messages”; the panic on write is the enforcement of that contract.
Additionally, the safe way to close a channel is for the writer to close it.
If you have multiple writers, you have to either synchronise them or not close the channel at all.
Sure, but the fact that it is shared state is why you can't naively have a Go channel that spans a cluster, while Erlang's "actor" system works just fine over a network and the safety systems (nodedowns, monitors, etc.) are a simple layer on top.
You don't have to close a channel in Go and in many cases you actually shouldn't.
Even if you choose to close a channel because it's useful to you, it's not necessarily shared state. In a lot of cases, closing a channel behaves just like a message in its queue.
Maybe. Or maybe we observe the same point of information source from two points which happen to be distant in the three coordinates we are accustomed to dealing with, but both close to that single point in some other.
I think it's great for recruiting. This signals to the world their investment in making devs happier (one of the top two reasons mentioned was "devs were happier with Kotlin").
This kind of short-sighted, simplistic reasoning/behaviour is what I worry about the most in terms of where our society is going. I always wonder - who will be the people buying or using your software (built very cheaply and efficiently with AI) once they can do the same, get replaced by AI, or bankrupt themselves?
Everybody seems so focused on how to get ahead in the race to profitability that they don't consider that the shortcut they are taking might be leading to a cliff.
> You need to wait a bit before you can start faithfully translating the meaning
I guess it's possible that the AI learns about a specific person over time? That way it can be confident about what's being said as soon as the person starts saying it.
But wouldn't the same issue apply to user-space TCP implementations too?
Couldn't user-space TCP implementations also accumulate a "path-dependent sequence of accidents" that a power user might eventually need to figure out?
Yes, but instead of being a 35-year-old accretion of mistakes, a user-space network stack is likely to be part of a more typical software lifecycle, one that gets updated more easily and is ultimately replaced. Also, such things are dramatically easier to debug.
Southern California. You can't escape compromising on certain factors like costs, but I find it a pleasant place to wake up everyday. As I've gotten older, I've prioritized my daily wellbeing more and more.
I think getting customers to sign up is the hardest part. Next they could start adding opt-in features (probably already in the works?) which cost an extra few dollars a month each?
For Java, see "The Exceptional Performance of Lil' Exception" by Aleksey Shipilëv: https://shipilev.net/blog/2014/exceptional-performance. As always, Shipilëv does a fantastic job of explaining the inner details of the JVM and the observed performance profile.
A few years ago, I got hit by the high cost of a hidden exception (used for flow control by the JDK) while using LocalDate#format to parse a valid date. It was fun to troubleshoot and fix OpenJDK: https://unportant.info/using-exceptions-for-flow-control-is-...
I would be interested in reading similar articles for other languages.
Stack traces. The information required to build a stack trace is deliberately kept off the critical path so it doesn't impact performance during normal operation, but that means that building a stack trace requires going out and fetching the debug symbols and correlating them.
Without stack traces, exceptions are just a type of goto.
While you are absolutely right that collecting stack traces is an extremely costly operation, it's not the only problem. For example, in C++, which doesn't collect any kind of stack trace, throwing an exception is still ~1 order of magnitude slower than returning a value through all the layers. Note that this cost only happens when the exception is thrown; exception-based code is otherwise slightly faster than `if ret < 0` style C code, as the check is entirely omitted.
There is some explanation as to why this happens in this SO response [0]. The gist is that the dynamic nature of exception handling means that the compiler needs to consult runtime type information to decide where to jump when the exception is thrown, which means trawling through some relatively lengthy data structures. Adding to the problem, these data structures are not normally used a lot, so they are very likely not to be cached - though this may change for a program that actually throws exceptions in a hot loop, and the difference may not be as stark.
Ah, fair, I don't have a lot of experience with C++. My answer was based on Java, which from the benchmarks I've seen doesn't suffer from any performance hit when you use a static exception object with no stacktrace.
What about dynamic languages? Are they always collecting the stack and keeping it in some object that the exception can grab the data from at any time? And if that's the case, wouldn't it always be slow regardless of whether anything is raised or not?
A performant dynamic language won't bundle the debug symbols either, so my assumption would be that the performance of an exception would still be bad.
But the problem is that fixing existing broken stuff might not be rewarded as much as working on the "new feature". This means that if I focus on just fixing the broken stuff, I'd be managed out. And even if I put in the extra effort to fix the broken stuff and still deliver my new features, the other thousands of engineers won't do the same, meaning I can't make a meaningful dent.
(I don't work for Amazon, but I can imagine it's similar everywhere.)
> Don't Communicate by Sharing Memory; Share Memory by Communicating
https://www.php.cn/faq/1796714651.html