Hardware is faster, but the "abstraction tax" is higher than ever.
As someone currently fighting to shave megabytes off a C++ engine, it hurts my soul to see a simple chat app (Electron) consume 800MB just to idle. We spent the last decade using Moore's Law to subsidize lazy garbage collection, five layers of virtualization, and shipping entire web browsers as application runtimes. The computer is fast, but the software is drowning it.
> Introducing a library with two GitHub stars from an unknown developer
I'd still rather have the original than the AI's unattributed regurgitation. Of course, the fewer users something has, the more scrutiny it requires, and below a certain threshold I'll be sure to pin an exact version and leave a comment so whoever bumps deps in the future knows to take care with it.
> Introducing a library that was last updated a decade ago
Here I'm mostly with you, if only because I will likely want to apply whatever modernisations were not possible in the language a decade ago. On the other hand, if it has been working without updates for a decade, and people are STILL using it, that sounds pretty damn battle-hardened by this point.
> Introducing a library with a list of aging unresolved CVEs
How common is this in practice? I don't think I've ever gone library hunting and found myself with a choice between "use a thing with unsolved CVEs" and "rewrite it myself". Normally the way projects end up depending on libraries with lists of unresolved CVEs is by adopting a library that subsequently becomes unmaintained. Obviously this is a painful situation to be in, but I'm not sure it's worse than if you had replicated the code instead.
> Pulling in a million lines of code that you're reasonably confident you'll never have a use for 99% of
It very much depends - not all imported-and-unused code is equal. Like yeah, if you have Flask for your web framework, SQLAlchemy for your ORM, and Jinja for your templates, you probably shouldn't pull in Django for your authentication system. On the other hand, I would be shocked if I had ever used more than 5% of the standard library in the languages I work with regularly. I am definitely NOT about to start writing my Rust as no_std, though.
> Relying on an insufficiently stable API relative to the team's budget, which risks eventually becoming an obstacle to applying future security updates (if you're stuck on version 11.22.63 of a library with a current release of 20.2.5, you have a problem)
If a team does not have the resources to keep up to date with their maintenance work, that's a problem. A problem that is far too common, and a situation that is unlikely to be improved by that team replicating the parts of the library they need into their own codebase. In my experience, "this dependency has a CVE and the security team is forcing us to update" can be one of the few ways to get leadership to care about maintenance work at all for teams in this situation.
> Each line of code included is a liability, regardless of whether that code is first-party or third-party. Each dependency in and of itself is also a liability and ongoing cost center.
First-party code is an individual liability. Third-party code can be a shared one.
It’s partly that, but it’s also partly that the quality SUCKS. I’m frustrated with AI blogspam because it doesn’t in any way help me figure out whatever I’m researching. It’s such low quality. What I want and need is higher-quality primary sources: in-depth research and investigation, presented in an engaging way. Or with movies and shows, I want something genuine, with a genuine story that feels real and characters that feel real and motivated.
AI is fake, it feels fake, and it’s obvious. It’s mind-blowing to me that executives think people want fake crap. Sure, people are susceptible to it, and get engaged by it, but it’s not exactly what people want or aspire to.
I want something real, something that makes me feel. AI generated content is by definition fake and not genuine. A human is by definition not putting as much thought and effort into their work when they use AI.
Now someone could put a lot of thought and effort into a project and also use gen AI, but that’s not what’s getting spammed across the internet. AI is low-effort, so of course the pure volume of low effort garbage is going to surpass the volume of high effort quality content.
So it’s basically not possible to like what AI is putting out, generally speaking.
As a productivity enhancer in a small role, sure it’s useful, but that’s not what we’re complaining about.
> that takes continual work, so the only way to make that possible is to somehow provide ongoing funding.
Not really: perpetuities have existed in finance for a long time, and the concept of ‘time value of money’ for even longer.
You can turn $3m in revenue today into a US treasury bond portfolio that delivers $120k a year. That’s enough to pay for maintenance and minor development of new features.
You can also say: I’ll just charge $120k a year in fees indefinitely. But that has the same present value (see time value of money) as $3m today. These worlds are interchangeable, except that in the upfront world there is no risk that some of your customers walk away at some point, making further upkeep untenable for the remaining customers.
> Internet but not for any other commerce. If you sell products to someone in the EU, you are liable to EU laws about that product category and commercial activity. The internet is the only exception, and that has caused a lot of problems.
I don't know the state of play now, and I do know that things have gotten more that way over time. But the traditional approach to international product sales is that the importer is responsible. That originally meant the person who physically brought it into the country. As common carriers became more common, it meant the person who ordered the thing. That's occasionally been leavened by some consideration of whether the seller specifically targeted customers in the receiving country. And nowadays there's more of a tendency to start "blaming" sellers in some cases, probably because nowadays "importing" something is often a retail order from a specific consumer, as opposed to somebody bringing in a shipping container on spec to resell. Maybe some of those changes are appropriate, but it's just not true that physical goods have always been treated the way you want Web sites treated here, or even that they're mostly treated that way now.
> IP block is currently the only reasonable way to apply laws to internet based commerce.
"IP block" works in both directions.
If you want to keep something out of your country, you should be responsible for blocking it, not the other way around. That's not necessarily easy, but it's less costly in total than demanding that every Web site enforce every country's regulation... and it has the advantage of putting the cost of a regulation on the people imposing it, which is where it belongs.
> It has its flaws in accuracy; but ISPs could easily create a system to make them more reliable for IP lookup.
From your use of the word "easily", I conclude that you personally would not be among those responsible for making that work.
> Arguing that websites cannot be regulated outside of country of origin is an insane position to take with even the minimal level of hypothetical reasoning of what that would imply.
First, you can in fact "regulate" by blocking, without trying to extend the reach of your laws outside of your border. Your claim that a regulator is left totally powerless is just false.
Second, in practice, that "hypothetical" is pretty close to what we have now, and even closer to what we had 10 years ago. The world did not end.
> UK's laws are dumb, but they should be free to enforce them for websites operating in the UK
Sure, as long as we recognize that "operating in the UK" properly means "is physically located in or controlled from the UK" and not "happens to be accessible to people in the UK". The latter definition would indeed be insane.
It's also because around 20 years ago there was a "reset" when we switched from x86 to x86_64. When AMD introduced x86_64, it made a bunch of the previously optional extensions (SSE up to a certain version, etc.) a mandatory part of x86_64. Gentoo systems on x86 could already be optimized to use those instructions, but now (2004ish) every system using x86_64 was automatically taking full advantage of all of these instructions*.
Since then we've slowly started accumulating optional extensions again; newer SSE versions, AVX, encryption and virtualization extensions, probably some more newfangled AI stuff I'm not on top of. So very slowly it might have started again to make sense for an approach like Gentoo to exist**.
* usual caveats apply; if the compiler can figure out that using the instruction is useful etc.
** but the same caveats as back then apply. A lot of software can't really take advantage of these new instructions, because newer instructions have been getting increasingly use-case-specific; and applications that can greatly benefit from them will already have alternative code paths to take advantage of them anyway. Also, a lot of the stuff happening in hardware acceleration has moved to GPUs, which have a feature discovery process independent of the CPU instruction set anyway.
I think that's a little simplistic. I have liberal (in the British English sense) views specifically because I think humanity is fundamentally flawed. If we are all flawed, particularly when it comes to wielding power over others, it's self-evident in my opinion that governments should be limited and that the total power any individual or institution can amass should have a hard ceiling. I see explicit anti-authoritarianism as a necessary counterweight to our flawed nature: every exercise of political power is potentially harmful, but through the ideas developed in the Enlightenment it can at least be contained and controlled.
Humans are inherently flawed and they're inherently kind. We're evolutionarily primed for competition and cooperation. Antisocial behaviour can be both inherent and environmental. I feel you might be setting up a false dichotomy when the motivations for political beliefs are often pretty complex and varied.
> If I invest in a homelab that can host something like Qwen3 I'll recoup my costs in about 20 months without having to rely on Anthropic
For me it's equally that I don't trust any of these service providers to keep maintaining whatever service or model I'm relying on. Imagine if I build an entire process around one, and then the bubble bursts and they either take away what I'm using or start charging outrageous amounts for it.
I feel we are well past the point where the base technology is useful enough, and all the work is in how you implement and adapt it into your process/workflow. A new model coming out that is 3% better is relatively meaningless compared to figuring out how to better integrate what I already have, which might give me a 20% bump for very little effort.
So at this point all I really want is stability in the tech so I can optimise everything else. Constant churn of hosted providers thrusting change at me every second day is actively harmful to my productive use of it at this point. Hence I want local models so I can just tune out the noise and focus on getting things done.
They want to use Facebook, Instagram, TikTok... the exact services that wouldn't exist in the first place if it weren't for the open, neutral Internet, something they didn't care about either.
It should be a "right to not have product forced on you." When I buy a device, whether it is a car, a refrigerator, or an application, I want that thing that I saw in the store, as it exists on the store shelf, including the features and capabilities. I do not expect that I am going to maintain some kind of ongoing relationship with the manufacturer where they get to modify my device at their whim over the air.
Manufacturers should feel free to offer updates. If the user feels the tradeoffs make sense, then they should be free to accept updates. But this business where the manufacturer thinks they are somehow entitled to mess around with a product you've already purchased from them has got to end. It's not their product anymore, it's yours.
In service to the pun, there is a relatively famous demo of using erlang for embedded development where they show off hot code reloading of a drone's flight software while it's in flight.
Storage is not needed. You can consume solar power as it's generated, and it is as useful then as if it came from oil, gas or coal.
When the sun goes down, you have saved tons of oil, gas and coal that didn't have to be burned during the day. Which is very, very good. You don't have to "solve nighttime" before solar makes sense; it makes sense immediately.
Edit: And of course, nighttime is also being solved, in many ways, already.
The problem with the idea of "base power" is that in the past, the cheapest electricity was from the big thermal generators that take a day to warm up, and operate most cheaply by pumping out at maximum efficiency 24 hours a day. You could then layer on the more expensive electricity sources that could spin up in 15 minutes or a few hours, and match the demand curve, as long as you matched baseload generators to the minimum of the demand curve, and have a cost-optimal electricity mix.
Now that we have cheaper sources of energy for parts of the day, "base" power is a much less desirable concept. It's gone from a simple and straightforward optimization problem that a middle-schooler could solve to a cost optimization problem that markets and linear solvers can solve.
Now that we have cheap storage, and solar-plus-storage is cheaper than coal in the UK, the cost optimization is getting simpler: get rid of all the base load coal!
I know of one datacenter that burned down because someone took a dump before leaving for the day, the toilet overflowed, then flooded the basement, and eventually started an electrical fire.
I'm not sure you could realistically explain that as anything. Sometimes ... shit happens.
What is an operating system? At its core, an OS is a program that runs other programs. Yet this Windows program likes to randomly kill all the programs it's supposed to keep running, at night, when it thinks you aren't looking. It literally fails at the most basic purpose of an operating system.
My mother lives in suburban Massachusetts. She always said that she never imagined how it was possible for me to live with two small kids without a car in Berlin.
She came to visit for one month. After the first week she was already comfortably getting around on the U-Bahn to pick up the kids at school. I have 4 supermarkets within 150m of me, so we would walk to do groceries every other day. I spend ~80€/month on taxi rides (for the occasional trip to meet someone in a less convenient place), which is less than what she pays for car insurance alone, not even counting the cost of gas.
At the end of the trip, she got it. Having a car is not a necessity. It should be seen (and taxed) as a luxury.
KDE is, as its name implies, a desktop environment. And it hasn't been "infected" by the "mobile" virus.
I often wondered why desktop UIs became so terrible somewhere in the 2010s, and I don't want to attribute it to laziness, greed, etc. People have been lazy and greedy since people existed; there must have been something else. And I think that mobile is the answer.
UI designers are facing a really hard, if not impossible, problem. Most apps nowadays have desktop and mobile variants, and you want some consistency, as you don't want users to relearn everything when switching variants. But mobile platforms, with their small touchscreens, are completely different from desktop platforms with their large screens, keyboards and mice. So what do you do?
In addition to mobile, you often need to target the browser too, so: native desktop, native mobile, browser desktop, browser mobile. And then you add commercial considerations like cost, brand identity, and the idea that if you didn't change the UI, you didn't change anything. Commercial considerations have always been a thing, but the multiplication of platforms made it worse, prompting the idea of running everything in a browser and making the desktop interface just the mobile interface with extra stuff.
There's a lot of good work here and I don't want to minimise the issue in any way but: unless the Windows ACPI stack is implemented in an extremely fucked up way, I'd be surprised if some of the technical conclusions here are accurate. (Proviso: my experience here is pretty much all Linux, with the bits that aren't still being the ACPI-CA stack that's used by basically every OS other than Windows. Windows could be bizarre here, but I'd be surprised if it diverged to a huge degree)
AML is an interpreted language. Interrupts need to be handled quickly, because while a CPU is handling an interrupt it can't handle any further interrupts. There's an obvious and immediate conflict there, and the way this is handled on every other OS is that upon receipt of an ACPI interrupt, the work is dispatched to something that can be scheduled rather than being handled directly in the interrupt handler. ACPI events are not intended to be performance critical. They're also not supposed to be serialised as such - you should be able to have several ACPI events in flight at once, and the language even includes mutex support to prevent them stepping on each other[1]. Importantly, "Sleep()" is intended to be a "wake me when at least this much time has passed" request, not a "spin the CPU until this much time has passed" one. Calling Sleep() should let the CPU go off and handle other ACPI events or, well, anything else. So there's a legitimate discussion to be had about whether this is a sensible implementation or not, but the Sleep() stuff in itself is absolutely not the cause of all the latency.
What's causing these events in the first place? I thought I'd be able to work this out because the low GPE numbers are generally assigned to fixed hardware functions, but my back's been turned on this for about a decade and Intel's gone and made GPE 2 the "Software GPE" bit. What triggers a software GPE? Fucked if I can figure it out - it's not described in the chipset docs. Based on everything that's happening here it seems like it could be any number of things, the handler touches a lot of stuff.
But OK, we have something that's executing a bunch of code. Is that in itself sufficient to explain video and audio drops? No. All of this is being run on CPU 0, and this is a multi-core laptop; if CPU 0 is busy, the work can be done on other cores. The problem here is that all cores suddenly stop executing user code, and the most likely explanation for that is System Management Mode.
SMM is a CPU mode present in basically all Intel CPUs since the 386SL back in 1989 or so. Code accesses a specific IO port, the CPU stops executing the OS, and instead starts executing firmware-supplied code in a memory area the OS can't touch. The ACPI decompilation only includes the DSDT (the main ACPI table) and not any of the SSDTs (additional ACPI tables that typically contain code for additional components such as GPU-specific methods), so I can't look for sure, but what I suspect is happening here is that one of the _PS0 or _PS3 methods is triggering into SMM and the entire system[2] is halting while that code is run, which would explain why the latency is introduced at the system level rather than it just being "CPU 0 isn't doing stuff".
And, well, the root cause here is probably correctly identified, which is that the _L02 event keeps firing and when it does it's triggering a notification to the GPU driver that is then calling an ACPI method that generates latency. The rest of the conclusions are just not important in comparison. Sleep() is not an unreasonable thing to use in an ACPI method, it's unclear whether clearing the event bits is enough to immediately trigger another event, it's unclear whether sending events that trigger the _PS0/_PS3 dance makes sense under any circumstances here rather than worrying about the MUX state. There's not enough public information to really understand why _L02 is firing, nor what is trying to be achieved by powering up the GPU, calling _DOS, and then powering it down again.
[1] This is absolutely necessary for some hardware - we hit issues back in 2005 where an HP laptop just wouldn't work if you couldn't handle multiple ACPI events at once
[2] Why the entire system? SMM is able to access various bits of hardware that the OS isn't able to, and figuring out which core is trying to touch hardware is not an easy thing to work out, so there's a single "We are in SMM" bit and all cores are pushed into SMM and stop executing OS code before access is permitted, avoiding the case where going into SMM on one CPU would let OS code on another CPU access the forbidden hardware. This is all fucking ludicrous but here we are.
That's not the problem. There is a cultural (and partly technical) aversion in JavaScript to large libraries - this is where the issue comes from. So, instead of having something like org.apache.commons in Java or Boost in C++ or Posix in C, larger libraries that curate a bunch of utilities missing from the standard library, you get an uncountable number of small standalone libraries.
I would bet that you'll find a third party `leftpad` implementation in org.apache.commons or in Spring or in some other collection of utils in Java. The difference isn't the need for 3rd party software to fix gaps in the standard library - it's the preference for hundreds of small dependencies instead of one or two larger ones.
I’m not familiar with pthread_cancel, but I am with TerminateThread. It’s not something that can be used safely, ever. Raymond Chen has written about it a few times, including the history.
> Originally, there was no TerminateThread function. The original designers felt strongly that no such function should exist because there was no safe way to terminate a thread, and there’s no point having a function that cannot be called safely. But people screamed that they needed the TerminateThread function, even though it wasn’t safe, so the operating system designers caved and added the function because people demanded it. Of course, those people who insisted that they needed TerminateThread now regret having been given it.
It’s really not; nuclear inherently requires extreme costs to operate. Compare its costs to coal, which isn’t cost-competitive these days. Nuclear inherently needs a lot more effort refining fuel, since you can’t just dig up a shovelful of ore and burn it. Even after refining you can’t just dump fuel in; you need fuel assemblies. Nuclear must have a more complicated boiler setup with an extra coolant loop. You need shielding and equipment to move spent fuel, and a spent fuel cooling pond. Insurance isn’t cheap when mistakes can cost hundreds of billions. Decommissioning could be a little cheaper with laxer standards, but it’s never going to be cheap. Etc., etc.
Worse, all those capital costs mean you’re selling most of your output 24/7 at generally low wholesale spot prices unlike hydro, natural gas, or battery backed solar which can benefit from peak pricing.
That’s not regulations, that’s just inherent requirements of the underlying technology. People talk about small modular reactors, but small modular reactors only make the heat; they don’t actually drive costs down meaningfully. Similarly, the vast majority of regulations come from lessons learned. So yes, they spend a lot of effort avoiding foreign materials falling into the spent fuel pool, but failing to do so can mean months of downtime and tens of millions in costs, so there isn’t some opportunity to save money by skipping that regulation.
Yes, that’s my point. They are scary - memorably so - in a way that very few other forms of power generation are. The closest equivalent I can think of is a major hydroelectric dam breaking.
Also remember that at each major incident, despite the failures that led to it, people fought tirelessly, in several cases sacrificing themselves, to reduce the scope of the disaster. Each of them could have potentially been worse. We are lucky in that the worst case death figures have not been added to the statistics.
These disasters were huge, newsworthy and alarmingly regular. People read about those getting sick and dying directly as a result. They felt the cleanup costs as taxpayers. They saw how land became unusable after a large event, and, especially terrifying for those who had lived as adults through the Cold War, saw the radioactive fallout blown across international borders by the wind.
It’s not Greenpeace or an anti-nuclear lobby who caused the widespread public reaction to nuclear. It was the public seeing it with their own eyes, and making an understandable decision that they didn’t like the risks.
Chernobyl was one hammer blow to the coffin lid, Fukushima the second, but nuclear power was already half-dead before either of those events, kept alive only by unpopular political necessity.
I’m not even anti-nuclear myself, but let’s be clear: the worldwide nuclear energy industry is itself to blame for the lack of faith in nuclear energy.
We can argue about "biggest" all day long, but UTF-16 is a huge design failure because it made a huge chunk of the lower Unicode space unusable, thereby making better encodings like UTF-8, which could easily have represented those code points, less efficient. This layer-violating hack should have made it clear that UTF-16 was a bad idea from the start.
Then there is also the issue that technically there is no such thing as UTF-16; instead you need to distinguish UTF-16LE and UTF-16BE. Even though approximately no one uses the latter, we still can't ignore it and have to prepend documents and strings with byte order marks (another wasted pair of code points for the sake of an encoding issue), which means you can't even trivially concatenate them anymore.
Meanwhile UTF-8 is backwards compatible with ASCII, byte order independent, has tons of useful properties and didn't require any Unicode code point assignments to achieve that.
The only reason we have UTF-16 is because early adopters of Unicode bet on UCS-2 and were too cheap to correct their mistake properly when it became clear that two bytes wasn't going to be enough. It's a dirty hack to cover up a mistake that should have never existed.