Honestly, even if we're far from FP being mainstream (which I don't even wish for), I wouldn't have bet a dollar that so many features from Lisp/ML would have crossed over this fast (I mean destructuring, lambdas, rest args, immutable bindings by default, optional chaining...).
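For anyone who hasn't kept up with modern JS, here's a quick illustrative snippet (my own, not from any particular codebase) showing the features listed above in everyday JavaScript:

```javascript
// ML/Lisp-flavored features now built into JavaScript:

// immutable binding by default-ish style
const point = { x: 1, y: 2, meta: { label: "p" } };

// destructuring
const { x, y } = point;

// lambda (arrow function) + rest args
const sum = (...nums) => nums.reduce((a, b) => a + b, 0);

// optional chaining (with nullish coalescing as a fallback)
const label = point.meta?.label ?? "unnamed";

console.log(x + y, sum(1, 2, 3), label); // → 3 6 p
```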
> admittedly feature bloat, ads, tracking, etc. are also a problem, but I think they're mostly orthogonal to under-the-hood performance
I think for webpages it is the opposite: non-orthogonal in most cases.
If you disable your JS/Ad/...-blocker, and go to pages like Reddit, it is definitely slower and the CPU spikes. Even with a blocker, the page still does a thousand things in the first-party scripts (like tracking mouse movements and such) that slow everything down a lot.
Godot Engine was awarded $50,000 in 2019 by Mozilla as part of the Mozilla Open Source Support (MOSS) program. Godot also received a MOSS grant of $20,000 in 2016.
I'm not asking for support. Godot could definitely go the for-profit route and charge for cloud builds and other features like Unity does. I just want to make sure that if I invest in learning the tool as well as I know Unity that it'll be around for the next few years.
Continuing development is continuing development ... support is answering questions on forums, responding to issues on github, community management, that sort of thing.
I could be wrong, but I believe that's what they mean when they say they hired a "generalist" this year to help cover those things. It's not a direct 1:1 of "I pay, I get support", but donating will be intrinsically linked to it and seems part of their overall plan for the future.
I don't think I've ever seen someone use M, NM, or nmi for nautical mile. I've only ever seen nm. I used nautical miles a lot in a previous career, in the context of aircraft mission planning.
Telling people in the industry to use nmi or NM seems a lot like telling computer people to use MiB or GiB instead of MB or GB, or to pronounce gif "jiff". They're just not going to change.
Reliable software has used swapping for years. Artificially lowering the virtual memory ceiling by disabling swap is just austerity for almost no benefit.
Paging into swap often has almost no perceptible effect in regular usage, unless you are running software with microsecond latency guarantees in infrequently accessed processes (the ones that get swapped out).
In 'modern' microservice architectures, this is not true. If you look at the k8s approach, reliable software is created through redundancy. A part of the app being killed by the OOM killer shouldn't matter; it should automatically be rescheduled on another node.
Kubernetes as a platform recommends disabling swap completely, and you have to explicitly allow nodes to have swap; otherwise the kubelet fails to start.
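For reference, opting a node in looks roughly like this (a sketch of the KubeletConfiguration fields as of recent Kubernetes versions; check the docs for your version, since swap support has moved through alpha/beta gates):

```
# KubeletConfiguration fragment: explicitly allowing swap on a node
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false          # by default the kubelet refuses to start on a swap-enabled node
featureGates:
  NodeSwap: true           # feature gate required before workloads may use swap
memorySwap:
  swapBehavior: LimitedSwap
```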
This is sane behaviour if you're dealing with a large cluster with a complex architecture that no single person could or should know all the ins and outs of. There is no "let's log on to the machine and see what's happening" when dealing with these types of architectures, even at smaller scale.
And a massive part of Go's target is exactly these modern architectures/workloads...
On Linux there is a best-of-both-worlds approach: use zram for swap plus a userspace OOM killer like earlyoom. If processes use a bit more memory or start leaking, they won't get killed and nothing will slow down much, but they will get killed if things go far enough to cause performance problems.
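A minimal sketch of that setup on a systemd distro (package names, paths, and thresholds vary; `-m`/`-s` are earlyoom's percent-remaining thresholds for RAM and swap):

```shell
# zram-backed swap: systemd's zram-generator reads this file at boot
cat > /etc/systemd/zram-generator.conf <<'EOF'
[zram0]
zram-size = ram / 2        # compressed swap device sized at half of RAM
EOF

# earlyoom kills the biggest offender before the system starts thrashing;
# -m 5 -s 10 => act when less than 5% RAM and 10% swap remain
# (on many distros this goes in /etc/default/earlyoom as EARLYOOM_ARGS="-m 5 -s 10")
systemctl enable --now earlyoom
```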
I don’t see how you’d reliably get less than a frame of latency with vsync. If your mouse input lands while you are waiting for the next frame, you will lose at least one whole frame.
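To put rough numbers on that (a back-of-the-envelope sketch that ignores render and scanout time): input landing at a random point in the refresh interval waits up to one full frame, and half a frame on average, before the next vblank.

```javascript
// Wait until the next vsync boundary, for a given refresh rate
const frameMs = (hz) => 1000 / hz;

// input can land anywhere in the interval, so it waits up to one full frame
const worstCaseWaitMs = (hz) => frameMs(hz);
const averageWaitMs = (hz) => frameMs(hz) / 2;

console.log(worstCaseWaitMs(60).toFixed(1));  // "16.7"
console.log(worstCaseWaitMs(144).toFixed(1)); // "6.9"
console.log(averageWaitMs(60).toFixed(1));    // "8.3"
```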
In terms of comparison with an FPS, the same amount of responsiveness is required for any UI - i.e. there should be no perceptual latency.
Text editors on a 144 Hz G-Sync monitor are noticeably better than vsynced 60 FPS, in the way a BMW door closes noticeably better than an entry-level car's. Technically both work the same, and yet you can tell they're very different.
> The popular "classic" 2d libraries like cairo and skia are very big and complicated to build / depend on
It may be harder than editing a Cargo file, but it is something you do once. They are some of the most mature and tested libraries in the world.
> wgpu will eventually also work on the web, so it's a great option for broad platform compatibility
OpenGL works everywhere today, including the web!
> In terms of efficiency (memory, battery, cpu etc.), the modern graphic APIs have more headroom than the GL-based ones
It doesn't matter for a pixel editor. Even if it did, you would have to do a very good job to beat the driver's GL implementation.
Otherwise, you may end up with something slower or buggier!
> Today, 4K screens are pretty common, and in the future, 144hz monitors will be more and more popular. If I want to render 2d graphics with no latency in those setups, access to the GPU makes things feasible
I don’t disagree that GL would be a fine choice theoretically. I just happen to have a distaste for the OpenGL API. But this was a reply against using something like Cairo, as I understood it.
They start thinking they are above normal people and that everything is a joke.