kibwen's comments | Hacker News

> And as for Rust - that's beyond inexplicable.

No, you appear to have no idea what you're talking about here. Rust abandoned green threads for good reason; the problems were not minor but fundamental, and had to do with C interoperability, which Go sacrifices upon the altar (a fine choice in the context of Go, but not in the context of Rust). And no, Rust does not today have a green thread implementation. Furthermore, Rust's async design is dramatically different from JavaScript's: while it certainly supports typical back-end networking uses, it's designed to be suitable for embedded/freestanding contexts, to enable concurrency even on systems where threads do not exist, of which the Embassy executor is a realization: https://embassy.dev/


> Language designers who studied the async/await experience in other ecosystems concluded that the costs of function coloring outweigh the benefits and chose different paths.

Not really. The author provides Go as evidence, but Go's CSP-based approach far predates the popularity of async/await. Meanwhile, Zig's approach still has function coloring; it's just that one color is "I/O function" and the other is "non-I/O function". And this isn't a problem! Function coloring is fine in many contexts, especially in languages that seek to give the user low-level control! I feel like I'm taking crazy pills every time people harp on function coloring as though it were something deplorable. It's just a bad way of talking about effect systems, which are extremely useful. And sure, if you want a high-level managed language like Go with an intrusive runtime, then you can build an abstraction that dynamically papers over the difference at some runtime cost (this is probably the uniformly correct choice for high-level languages, like dynamic or scripting languages (although it must be said that Go's approach to concurrency in general leaves much to be desired (I'm begging people to learn about structured concurrency))).


Or use OCaml 5 which has a full algebraic effects system that solves the function coloring problem while still being highly performant.

How do they solve it?

CSP is a theory about synchronization and implies nothing about green threads or M:N scheduling. Go could have used OS threads and called it CSP.

Certainly it’s true that Go invented neither; both Erlang and Haskell had truly parallel green threads without function coloring before Go or Node existed.


I agree with you, but the big difference between function arguments and effect systems is that the tools we have for composing functions with arguments are a lot simpler to deal with than the tools we have for composing effects.

You could imagine a programming language that expressed “comptime” as a function argument of a type that is only constructible at compile-time. And one for runtime as well, and then functions that can do both can take the sum type “comptime | runtime”.
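To make the "effects as arguments" idea concrete, here's a minimal Python sketch (all names hypothetical, not any real language's design): the ability to perform an effect becomes an ordinary value, so the "color" shows up in the signature and composes like any other argument.

```python
class Io:
    """Capability token: a function can only perform IO if some caller
    hands it an Io value, so the effect is visible in the signature."""

def pure_double(x: int) -> int:
    # No Io parameter: statically "non-IO colored".
    return 2 * x

def logged_double(io: Io, x: int) -> int:
    # The Io parameter plays the role of the effect annotation; delete it
    # from the signature and the print below has no business being here.
    print(f"doubling {x}")  # stands in for real IO
    return pure_double(x)

print(logged_double(Io(), 21))
```

Composing two effectful functions is then just ordinary argument passing, which is the simplicity the comment above is pointing at.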


That is an unfair characterization of Zig. The OP correctly points out:

> Function signatures don’t change based on how they’re scheduled, and async/await become library functions rather than language keywords.

The functions have the same calling conventions regardless of IO implementation. Functions return data and not promises, callbacks, or futures. Dependency injection is not function coloring.
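A rough analogy in Python (hypothetical names; Zig's actual `Io` interface differs in the details): the IO implementation is an injected value, so the function keeps a single signature and returns plain data regardless of which implementation it's handed.

```python
import io

class BlockingIo:
    """Trivial IO implementation backed by a blocking stream."""
    def __init__(self, stream):
        self.stream = stream
    def read_line(self) -> str:
        return self.stream.readline()

class RecordingIo(BlockingIo):
    """A second implementation with the same interface; callers don't change."""
    def __init__(self, stream):
        super().__init__(stream)
        self.log = []
    def read_line(self) -> str:
        line = super().read_line()
        self.log.append(line)
        return line

def first_word(ioimpl) -> str:
    # One calling convention whichever implementation we receive, and the
    # result is plain data: no promise, callback, or future in sight.
    return ioimpl.read_line().split()[0]

print(first_word(BlockingIo(io.StringIO("hello world\n"))))
print(first_word(RecordingIo(io.StringIO("inject and swap\n"))))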


These things _are_ function colouring, but they show function colouring isn't scary or hard.

The original function colouring essay was much more about JavaScript's implementation than a general statement.

If JavaScript had exposed a way for a synchronous function to call back into the runtime to wait for an async function to complete, it would still be just as coloured, but no one would be complaining about colour (deadlocks yes, but that's another kettle of fish).


Boost.Asio (2005) is surely worth a mention, but the pattern predates it by decades: green threads, which is what goroutines are, come from the 1990s.

I mean, Java's Loom feels like the 'ultimate' example of the latter for the _ordinary_ programmer, in that it effectively leaves you writing what looks like completely normal threaded code however you please, and it all 'just works'.

Java has gone full circle.

Java had green threads in 1997, removed them in 2000 and brought them back properly now as virtual threads.

I'm kinda glad they've sat out the async mania; with virtual threads/goroutines, the async stuff just feels like lipstick on a pig. Debugging, stack traces, etc. are just jumbled.


They stopped at the Promises level with CompletableFuture, which led to "colored frameworks" like WebMVC vs. WebFlux in Spring.

In Rust debugging and stacktraces are perfectly fine because async/futures compile to a perfect state machine.

They are not perfectly fine. If a task panics then you will get the right stack trace, but there is no way to get a stack trace for a task that’s currently waiting. (At least not without intrusive hacks.)

I don't think comparing '97's green threads to virtual threads ever made sense.

Their purpose, implementation, everything is just so different; they don't share anything at all.


Java didn't really "sit it out". It launched CompletableFutures, CompletionStages, Sources and Sinks, arguably even Streams. All of those are standard-library forms of async programming. People tried to make them catch on, but the experience of using them (the runtime wrapping all your errors in CompletionExceptions, destroying your call stacks) just made them completely useless.

Every Java codebase using something like Flux serves as a datapoint in favor of this argument: they're an abomination to read, reason about, or (heaven help you) debug.

I'm curious how escape analysis works with virtual threads. With the asynchronous model, an object local to a function will be migrated to the old generation heap while the external call gets executed. With virtual threads I imagine the object remains in the virtual thread "stack", therefore reducing pressure in garbage collection.

The initial Loom didn't really provide the semantics and ergonomics of async/await which is why they immediately started working on structured concurrency.

And for my money, I prefer async/await to the structured concurrency stuff.


What should people read to learn about structured concurrency?

I think the clearest sales pitch comes from this post from the author of Trio, which is an implementation of structured concurrency for Python: https://vorpus.org/blog/notes-on-structured-concurrency-or-g... .

Perhaps Java's related JEPs could be a good starting point?

https://openjdk.org/jeps/505

There are also related discussions on other platforms that are worth reading.


In my experience people complain about it because they are coming from a blocking first mindset. They're trying to shoehorn async calls into an inherently synchronous structure.

A while back I just started leaning in. I write a lot of Python at work, and anytime I have to use a library that relies on asyncio, I just write the entire damn app as an asynchronous one. Makes function coloring a non-issue. If I'm in a situation where the two have to coexist, the async runtime gets its own thread and communication back and forth is handled at specific boundaries.
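A minimal sketch of that pattern (the coroutine here is a hypothetical stand-in for a real asyncio-only library call): the event loop lives in its own thread, and the synchronous side crosses the boundary with asyncio.run_coroutine_threadsafe.

```python
import asyncio
import threading

# The async runtime gets its own thread, as described above.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def fetch(x: int) -> int:
    await asyncio.sleep(0.01)  # pretend this is an asyncio-only library call
    return x + 1

def fetch_blocking(x: int) -> int:
    # The boundary: hand the coroutine to the loop's thread and block on
    # the resulting concurrent.futures.Future.
    return asyncio.run_coroutine_threadsafe(fetch(x), loop).result()

result = fetch_blocking(41)
print(result)  # 42
loop.call_soon_threadsafe(loop.stop)
```

Synchronous callers never see a coroutine; they call `fetch_blocking` like any other function, which is exactly the "specific boundary" being described.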


>In my experience people complain about it because they are coming from a blocking first mindset. They're trying to shoehorn async calls into an inherently synchronous structure.

There's no "inherently synchronous structure", at least not in Javascript. The nature is synchronous, asynchronous is an illusion built on top of it. Which is why you can easily block an "asynchronous" program:

  while (true) {} 
in any async function will do.

JavaScript execution is synchronous on a single call stack. That's why they added Workers, which are a different thing from async.

Rust's Tokio and co. are also blocking. You need threads to get something that isn't inherently synchronous with merely a facade of cooperative asynchronicity.
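The same point is easy to demonstrate in Python, asyncio being equally cooperative: an async function that never awaits holds the single-threaded loop, and everything else waits.

```python
import asyncio

order = []

async def hog():
    # No await inside: this holds the event loop just like `while (true) {}`
    # in a JS async function (except it eventually returns, so we can
    # observe the effect).
    for _ in range(100_000):
        pass
    order.append("hog done")

async def polite():
    order.append("polite ran")

async def main():
    task = asyncio.ensure_future(polite())  # scheduled, but starved
    await hog()  # runs to completion without ever yielding to the loop
    await task

asyncio.run(main())
print(order)  # ['hog done', 'polite ran']
```

Even though `polite` was scheduled first, it only runs after `hog` finishes: the asynchrony is cooperative, an illusion built on a synchronous core.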


> Makes function coloring a non-issue.

Yes, having to rewrite literally all of your code because you need to use an async function somewhere is an issue.

An even bigger issue is that now you have two (incompatible!) versions of literally every library dependency.


I'm usually writing applications, not libraries, so it's a non-issue for me.

I was talking about when writing from scratch.


> It's irrelevant though, you figure it out only when it's a problem.

For the past decade people have been clawing their eyes out over how sluggish their computers have become due to everything becoming a bloated Electron app. It's extremely relevant. Meanwhile, here you are seemingly trying to suggest that not only should everything be a bloated, inefficient mess, it should also be buggy and inscrutable, even moreso than it already is. The entire experience of using a computer is about to descend into a heretofore unimaginable nightmare, but hey, at least Jensen Huang got his bag.


That is the doom side. However, AI has found and fixed a lot of security issues. I have personally used AI to improve my code's speed; AI can analyze complex algorithms and figure out how to make them much faster in ways I could myself as a developer, but it's a lot of work that I typically wouldn't do. Even just writing various targeted benchmarks to see where the problems really are in my code is something I can do, but it would be so tedious that I often wouldn't bother. I can tell AI to do it and it will write them.

Only time will tell which version of the future we end up with. It could be good or bad and we will have to see.


In terms of runtime performance of applications, AI is a net win. You can easily remove abstractions like Electron, React, various libraries. Just let the AI write more code. You can even do the unthinkable and write desktop native again.

As the article mentions, this is a false dichotomy.

If you're an ordinary person driven to be healthy, drink water. Water is great. If you're already drinking water, you should absolutely not replace it with whatever bottled crap that Coke or Pepsi is peddling, be it "smart water" or otherwise.

But for people with sugar cravings bordering on addiction, which describes a depressingly enormous proportion of the population in the developed world, replacing sugary drinks with zero-calorie artificially-sweetened drinks can be a net health benefit. We know beyond a shadow of a doubt that obesity, diabetes, and heart disease are bad for your health, and consumption of sugar water is a significant driver of these. Yes, you could be even healthier by drinking water instead; see above. But sugar is an addictive chemical (sugar withdrawal is, in fact, a thing), and not everyone is going to quit cold turkey.

(And for the record, I fully agree that people should be more cognizant of their gut biome and how their diet affects it, including being skeptical of aspartame and other random synthetic ingredients.)


> sugar cravings bordering on addiction, which describes a depressingly enormous proportion of the population

It's almost like our bodies are designed to crave calories


Our bodies are designed to crave calories, but habitually ingesting too much sugar is more about hijacking dopamine release pathways than about fulfilling your body's basic need for satiation.

Or... you know, there could be some actual effort put into shedding such an addiction (sugar ain't that hard): build a bit of character and walk away better off in many regards. Winning against an addiction won't kill you or break you, and it makes you (much) stronger and healthier as a bonus. Why do people shy away from such things?

But no, let's do everything possible just to keep the comfortable crappy couch lifestyle: no sweat, no effort, miserable health, miserable life. Then there are articles about how the US population (which suffers the most from these shit HFCS addictions and the resulting obesity problems) is depressed... for many reasons, of course, but this sort of helpless-victim mindset is one of them.


> Why do people shy away from such things?

Have you ever met someone with a true addiction to food? I'm not talking about someone with a habitual craving for sweets. I'm talking about someone who consumes food compulsively like a chain-smoker; someone who, in the absence of whatever their favorites are, will consume and consume with little regard for what the food is: an entire jar of pickles, multiple pounds of grapes, a whole rotisserie chicken, et al.

I used to be one. I once ate six baked white onions [1] in one sitting before vomiting everywhere and rethinking my life.

I broke through naturally, but I wish GLP-1s had been prevalent at the time. Want to know what made breaking it so challenging?

1. Unlike other addictions, you have to continue consuming this one or else you will die.

2. Nearly every social event in the USA is tied in some way to food, which means that you have to exercise willpower __constantly__ if you have a social life.

3. People are more interested in shaming you than supporting you. Most want you to fail.

[1] https://www.youtube.com/watch?v=xV9spqCzSkQ

There's nothing wrong with HFCS either, at least not that isn't also wrong with sugar. This is all just naturalist fallacy stuff.

The HFCS stuff always feels weird to me. Like sure, there's glycemic index impact, it is measurably different, etc... but I feel like people don't realize that "high" fructose is different only by a few percent from table sugar, and is "high" only because it's being compared to regular corn syrup.

Like... HFCS-42 is 42% fructose. That's lower than cane / table sugar, which is 50%. If you really think fructose is the problem, HFCS-42 is an improvement. Or even better, embrace regular corn syrup because it has little to no fructose normally! It's nearly 100% glucose! (This is why 42% is "high")

And if it's glycemic index that people are worried about, throw a tiny amount of soluble fiber into your drink and it'll lower it by more than the sugar balance affects it.

None of it makes sense.


I don't believe it is measurably different! Apart from what you noted (HFCS is "high fructose" relative to normal corn syrup, not table sugar), ordinary sugars are broken down instantly by the human body.

The subtext and I think valid concern about HFCS is that it drastically reduces the cost of calorically sweetening foods and especially beverages.

But people routinely cruise past that to claims that HFCS itself is uniquely harmful to humans, and it isn't, at least no more than sugar is.


I think it's fairly safe to say there's a measurable difference - fructose generally (afaict) has noticeably lower insulin responses compared to glucose. Though it's still very minor compared to the total change vs none of course, and I haven't seen much of anything showing evidence of a benefit compared to the other - just "technically different".

Definitely agreed that there's a weird demonizing of HFCS in particular though. Maybe because it sounds technical? It's easy to point to because it's common, and it doesn't sound "natural".

And personally I don't think HFCS's clear manufacturing benefits really affect much, it's just the most convenient so it's the most used. The addictive qualities of sugar are much more valuable, IMO They™ would continue to sweeten things at the same level even if it were completely banned. They'd just use something else, and sucrose is also very cheap.


On the other hand, allowing people to feed their sweet addictions only reinforces and desensitizes them further. So while you are probably safe drinking ungodly amounts of aspartame water, you won't find equivalent substitutes for sugar in other foods, and you might suffer rebound consumption there, perhaps to a much higher total caloric intake than just drinking sugary water in moderation.

Another thing to watch out for is caffeine intake, which is often associated with sweetened drinks. Caffeine is a diuretic, and you'll find yourself drinking can after can of Diet Coke while not quite quenching your thirst or properly hydrating yourself. This is documented to lead to intense muscle pain and unexplained migraines in people who do physical work and abuse these types of drinks, and it can't be good for your kidneys long term, even under the assumption that sweeteners are 100% safe.

Overall, "drink plenty of water and use everything else in moderation" seems like solid advice.


You may need to sit down for this, but when Yahoo launched, TCL was 6 years old, Perl was 7, and Erlang was 8. Today, Go is 14, Swift is 12, and Rust is 11.

Marching humanoid terminator robots will never be as cheap as a drone. Autonomous suicide drone swarms are what should terrify you.

You say that now, but once we perfect AMBAC technology and accidentally release large numbers of Minovsky particles, we will need humanoid combat vehicles to fight our battles!

> Minovsky particles

I love the way these things always have to have names that sound exotic or menacing to English speakers. Where are the Smith particles or the Jim particles?


Well in this case it was made by and for Japanese speakers.

I guess Russians are scary for everyone. Including Russians, I assume.

All of my Russian friends briefly become terrified when they pass by a mirror, and have to regain their composure.

Not marching, but Ukraine uses continuous track machine gun robots seemingly very effectively. They aren’t suicide ones.

https://archive.is/dpNsN


They are an interesting prospect but their use isn't quite as claimed.

They are extremely vulnerable to the same drones humans are.

It's more along the lines of: in a patch where we're not expecting active fighting, this robot can act as a deterrent and provide surveillance.

Cheaper and simpler than a loitering ISR drone, but narrower in scope.

I believe for a while Samsung developed similar drones for the demilitarised zone in Korea. Those could be static as they were hard wired in.


> They are extremely vulnerable to the same drones humans are.

I am not confident about this. A human gets disabled by a few small shrapnel fragments in soft tissue. It is possible to build a much more protected robot, one that needs a direct hit to disable. That robot could also be very agile: e.g., doing an evasive jump at the last moment before being hit.


I think you just pitched a Robot Wars revival for 2026.

This article shows them being used for offense.

https://edition.cnn.com/2026/04/20/europe/robots-ukraine-bat...


Autonomous suicide drone swarms are easily countered by autonomous interceptor swarms.

>Marching humanoid terminator robots

Ground bots, not necessarily marching ones, do have their value. They can have bulletproof armor while still being relatively lightweight, small, and fast. They can easily carry even a 20-25mm autocannon, a very destructive weapon that can sometimes succeed even against a real tank.

And imagine when a swarm of drones lifts a ground bot, brings and drops it right at the needed point, and protects it from enemy drones while the ground bot just destroys everything around it. Synergy between different weapon systems has always been the super-weapon.


They can also sit in one spot guarding a position without using much battery. Ukraine recently took territory from Russian forces using ground bots, the first time it's been done without using soldiers on the ground. Now they're starting to scale the bots up to mass production.

The issue is remote control. A ground position means a lot of obstacles, in addition to the widespread jamming. One can try to control the bot from a fiber-optic-controlled drone hanging overhead, yet such a complication has its own drawbacks. That means ground bots really need to be made autonomous.

They don’t need to be remotely controlled anymore! Autonomous!

> Marching humanoid terminator robots will never be as cheap as a drone. Autonomous suicide drone swarms are what should terrify you.

If money or economics were relevant in these decisions, most wars would probably not play out in the first place. Tesla probably wouldn't be worth 1.2T. And we certainly wouldn't see AI buildouts happening at their current rates.

Economics and costs only matter for normal humans, small countries, and efforts that might actually help humanity. They're not seemingly considerations in nefarious applications.


It matters quite a bit. If your drone costs $1000, you can build a thousand times more of them than if a drone costs $1M. As the saying goes, quantity has a quality all its own.

This is a lesson the US has yet to learn, and its military drones are really expensive. Ukraine learned it by necessity, and now it's building millions of drones annually.


On the other hand, if Musk really flips his lid, he's one OTA away from a network of ground-delivered lithium bombs. The fear of humanoid bots is their banality: if a government or private company has a reason to build them, then the world is full of hardware with terrifying capability and questionable security.

I think what your parent commenter means is that, if the application is warlike or nefarious, then the money will be found. If, on the other hand, it is humanitarian, then every penny will be counted.

I disagree. If your charitable application is profitable, it will get funded.

Now, people will hate you for doing a "good" thing for money (exhibit A: name any pharma company selling the drugs that keep people alive; that company is going to get called a "cynical shill" given enough profit).

It just happens that the bad things are often highly profitable, so the investors will pour the pensioners' money in (because the pension money must flow).

That being said, the best way to get funded is not for your app to be good or bad, but to be massively fun. Sell tulips, video games, and Céline Dion tickets. Find a way to divert 10% of the profits to a charity.


Yes, I get that, but for whatever amount of money is found, you're better off using it more effectively. The cost of things still matters, if you want to win wars against serious adversaries.

One problem the US has had in its Iran adventure is that they're shooting down $30K drones with million dollar missiles, often several of them. Now the missile stockpiles have been depleted by 30% to 50%, depending on missile type, and they're not all that quick to replace.


> If money or economics were relevant in these decisions, most wars would probably not play out in the first place.

I don't understand what you mean here.

Aren't wars fought over natural resources, or the political power over natural resources?

Obviously people sometimes miscalculate but in principle I mean.


> Aren't wars fought over natural resources or the political power over natural resources.

Not really. They’re fought over fear of the future, desire for control and power over other people. “It’s us or them” captures one of the core calculi of war. It’s not rational, it’s just an expression of evolutionary imperatives.


Most military grade drones cost $10k or more and they can only be used once.

An optimized quadruped could probably be built for the same price and have an integrated 60mm mortar instead. The front legs act as the bipod and the rear legs would be designed to dig into the ground for stabilization. The only problem here is reloading the mortar, which could be done using a revolver style magazine. That's 5 shots per robot vs 1 per drone.


Which of those is opening doors?

Two drones. One to blast the door open, the next goes through.

Still more cost effective than a humanoid robot, even in the presence of hundreds of doors.


That breaks the building. If you want to destroy the whole thing, conventional weapons have that covered. Drones can't get through nets and doors. Though, have you considered packs of robot dogs with machine guns and one arm/hand? Cheaper than a fully bipedal humanoid robot.

One thing exists and is known to work and be cheap. The other is you musing about what might be possible, so they need to be judged differently. No land robot can currently move through a war environment in any effective way, let alone "open doors" etc.; they are too slow. Drones are not.

> have you considered packs of robot dogs with machine guns

I don't have it to hand, but already a few years ago a defense contractor had attached quite a heavy rifle, on some sort of articulable mount, to the top of something that looked exactly like Boston Dynamics' Spot. I'm not sure how much ammo it was capable of carrying or what its range was, but it's definitely a concerning development. I think I might become an enthusiastic custom anti-materiel rifle collector in the near future.



I'll carry an ammo belt of little EMP devices.

A microwave weapon could be effective. And reusable.

Or they might decide to, er, pre-deliver the payloads.

"Citizen, congratulations on reaching your age of majority. Report for your Patriotic Assurance Implant at surgical bay 43B."


You've made it clear from this thread that you have no idea what you're talking about. Please do not waste our time by commenting on this topic further.

Ha, I did maintain two safe languages. How many did you?

> it is better having automated resource management

Rust's ownership system is automated resource management. What you're asking for is dynamic lifetime determination, which Rust provides via types that opt out of the hierarchical single-ownership paradigm.


Nope, because it has plenty of manual hand holding to keep the compiler happy.

Manual memory management requires the programmer to insert calls to free at specific points in the program. That's manual static lifetime determination. A traditional garbage collector uses runtime analysis to determine when it's safe to call free. That's automatic dynamic lifetime determination. What Rust does is automatic static lifetime determination. Designing your data structures such that they're acyclic is not what anyone means when they say "manual memory management".

Rust requires the programmer to manually design data structures and algorithms in a way that doesn't trigger borrow checker compile errors.

There is nothing automatic about it.

Write Rust code, get a compile error if done incorrectly, manually fix the data structure or algorithm at the root cause, loop.


> 4x4: Not enough to draw "E", "M" or "W" properly.

However, 5x5 isn't enough to draw "e" properly if you also want lowercase letters to have less height than uppercase, so you need at least 6 vertical pixels. And then that isn't enough to draw any character with a descender properly, so you need at least 7 vertical pixels (technically you should have 8 in order to allow "g" and "y" to have a distinct horizontal descender while still sitting on the baseline, but this is probably an acceptable compromise). And remember that in practice this means you will still need at least 8x6 pixels to draw each character, to allow for a visible gap between letters below and beside them.


> 5x5 isn't enough to draw "e" properly if you also want lowercase letters to have less height than uppercase

It can be enough if you "cheat" and make use of the horizontal space. This is how I did it in my font:

   ##
  # #
  ##  #
   ###

> if you also want lowercase letters to have less height than uppercase

I think that's the least of the properties I'd be willing to sacrifice to have a font that tiny.


I think the `e` looks better in the 'real pixels' example they gave; I find my tends to 'fill in' the space of the top part of the letter, and I suspect in the context of a longer sentence it'd be pretty easy to parse.

(but yeah, it's not quite right, and is especially jarring in the nice, clean, blown up pixels in the top example)


It definitely looks better in the second screenshot than the first, but you have to be very, very close to the screen in order to see individual pixels like that. And on low-res displays, which this sort of font might be necessary for, it's going to look somewhat different because low-res screens tend to be chosen for cheapness, and cheap screens tend to be monochrome, so none of that artistic fuzzy subpixel coloring.

Was it intended that we should fill in the word 'mind' ourselves, to prove your point???

Shipping on water has been, by far, the cheapest mode of long-distance shipping since the moment boats were invented. That is to say, since thousands of years before boats were ever powered by the shit that destroys our lungs.
