I've been experimenting with a similar concept myself. The linter loop is the only thing that can keep the agent sane, in my opinion, and if anyone can generalize the bun+tsc loop to other tasks, this would finally be a way to trust LLM output.
I was annoyed at how Claude Code ignores my CLAUDE.md and skills, so I was looking for ways to extend type checking to them. So I wrote a wrapper on top of claude-agents-sdk that reads my CLAUDE.md and skills and compiles them into rules, which can be linter rules or custom checking scripts.
Then it hooks into all tools and runs the checks. The self-improving part kicks in when a rule doesn't work: I run the tool with the session id in review mode, and it proposes fixes and improves the rule checkers (not the md files). So it's kinda like vibe coding the rules, which definitely lowers the bar for me to maintain them. Repo: https://github.com/chebykinn/agent-ruler
Hm, speculating a bit, but it feels like NTSYNC is essentially the beginning of an NT subsystem for Linux, or maybe ntoskrnl as a kernel module. It feels like the cleanest and fastest way to port Windows, since the rest of the interfaces live in user space in real Windows anyway.
Essentially it should run almost without overhead: user: [gdi32.dll, user32.dll, kernel32.dll -> ntdll.dll] -> kernel: [ntoskrnl.ko]
UEFI switches the CPU into 32-bit v86 mode or directly into 64-bit mode, and you can't go back to real mode without a CPU reset, which v86 won't allow (you don't have ring -2 privileges) and which 64-bit mode can't do at all. I don't have a UEFI system, so I might be wrong (I even hope I'm wrong - it would mean slightly more freedom still exists), but from what I've read about it, I'm 90% certain it's not possible.
You're confusing several things here. The only x86 processor that didn't allow returning to real mode was the 16-bit 80286 - on all later ones it's as simple as clearing bit 0 of CR0 (and also disabling paging if that was enabled).
Nothing more privileged than ring 0 is required for that.
"v86" is what allowed real mode to be virtualized under a 32-bit OS. This is no longer available in 64-bit mode, but the CPU still includes it (as well as newer virtualization features which could be used to do the same thing).
I'm quite concerned about x86's future, but the article has a point if you read past the title.
It says that x86 is highly standardised - even with different combinations of chips, peripherals and motherboards, you know it will just work. That's not the case for ARM systems - can you even have something similar to the IBM PC with ARM?
I know firsthand that adding support for ARM devices on Linux is a huge, manual task - look at devicetree, for example; it's a mess. There is no standard like ACPI for ARM devices, so even powering off the computer is a problem; everything is proprietary and custom.
I don't agree with the article though: x86 is dying, and my worry is that ARM devices will bring an end to open platforms like the modern PC.
There is a standard like ACPI for Arm devices - it's called ACPI, and it's a requirement for the non-Devicetree SystemReady spec (https://www.arm.com/architecture/system-architectures/system...). But it doesn't describe the huge range of weirdness that exists at the more embedded end of the Arm world, and it's functionally impossible for it to do so while Arm vendors see a device as an integrated whole rather than as a general-purpose machine that can have commodity operating systems installed on it.
> can you even have something similar to IBM PC with ARM
Yes, it's called SBBR, which requires UEFI and ACPI. It is more common on server hardware than on consumer-grade embedded devices. The fact that it is not ubiquitous is really holding ARM back.
Will you PLEASE stop promoting UEFI and ACPI?! These are closed-source blobs that the manufacturers will never update and have complete control over the system at ring -2. Why would you even consider it?
Device tree does the same thing and it's open source. Even if you can only extract it in binary form from a proprietary kernel or U-Boot, you can decompile it very easily.
> Will you PLEASE stop promoting UEFI and ACPI?! These are closed-source blobs that the manufacturers will never update and have complete control over the system at ring -2. Why would you even consider it?
Well, no. UEFI can have proprietary implementations, but nothing prevents just shipping edk2.
Conversely, there are so many phones with devicetree... and proprietary blobs controlling boot and running beneath ring 0. You're kinda pointing out a real problem, but confusing it with a different part of the stack.
> there are so many phones with devicetree... and proprietary blobs controlling boot and running beneath ring 0
It's not the same!
A bootloader, once it loads the kernel and executes it, is overwritten in memory. No trace of it remains while the system is running - until the next boot. UEFI / ACPI / SMM continue to execute on the CPU after it has finished booting, "under" the kernel, preempting it as they please.
I mean, it's not uniformly, literally, technically identical, but yes, an Android phone running its OS in ring 0, while other OSes run in other VMs on top of the EL2 (ring -1) hypervisor, under the supervision of the Secure Monitor in EL3 (ring -2), is very much in the same boat.
What? You can build an entirely free UEFI. ACPI has a free compiler and a free interpreter. Neither implies or requires the existence of non-free blobs, and neither implies or requires any code running in a more privileged environment than the OS.
I've a bunch of devices running coreboot with a TianoCore payload, but they're largely either very weird and now unavailable, or I haven't upstreamed the ports, so that's not super helpful - but it's absolutely not impossible, and you can certainly buy Librebooted devices.
Let's say some hardware manufacturer open-sourced the specs required to implement it on its chips. (Very unlikely, but let's say they do...) So what? The dangerous capabilities remain.
Until UEFI and secure boot, SMM would run code provided by the BIOS. The BIOS was updatable, moddable, replaceable - see coreboot and numerous BIOS mods such as wifi-whitelist removal.
Trustzone usually runs code from eMMC. These chips are programmed at the factory with a secret key in the RPMB partition. It's a one-time operation - the user can't replace it. Without that key you can't update the code Trustzone executes. Only the manufacturer can update it.
Also, any ring -2 code can be used for secure boot locking the device to manufacturer approved OS, enforce DRM, lock hardware upgrades and repairs, spy, call home, install trojans by remote commands, you name it. And you can't audit what it does.
To respond in more detail: secure boot (as in the UEFI specification) does nothing to prevent a user from modifying their system firmware. Intel's Boot Guard and AMD's Platform Secure Boot do, to varying degrees of effectiveness, but they're not part of the UEFI spec and are not UEFI specific. I have replaced UEFI firmware on several systems with Coreboot (including having ported Coreboot to that hardware myself), I am extremely familiar with what's possible here.
> Trustzone usually runs code from eMMC.
This might be true insofar as the largest number of systems using Trustzone may be using eMMC, but there's nothing magical about eMMC here (my phone, which absolutely uses Trustzone, has no eMMC). But then you go on to say:
> Without that key you can't update the code Trustzone executes. Only the manufacturer can update it.
you're describing the same sort of limitation that you decried with SMM. As commonly deployed, Trustzone is strictly worse for user freedom than SMM is. This isn't an advantage for Arm.
> Also, any ring -2 code can be used for secure boot locking the device to manufacturer approved OS
No, the secure boot code that implements cryptographic validation of the OS is typically running in an entirely normal CPU mode.
> enforce DRM
This is more typical, but only on Arm - on x86 it's typically running on the GPU in a more convoluted way.
> lock hardware upgrades and repairs
Typically no, because there's no need at all to do any sort of hardware binding at that level - you can implement it more easily in normal code, why make it harder?
> spy
When you're saying "can be used", what do you mean here? Code running in any execution environment is able to spy.
> call home
Code in SMM or Trustzone? That isn't literally impossible but it would be far from trivial, and I don't think we've seen examples of it that don't also involve OS-level components.
> install trojans by remote commands
Again, without OS support, I'm calling absolute bullshit on this. You're going to have an SMM trap on every network packet to check whether it's a remote command? You're going to understand a journaling filesystem and modify it in a way that remains consistent with whatever's in cache? This would be an absolute nightmare to implement in a reliable way.
> And you can't audit what it does.
Trustzone blobs do have a nasty habit of being encrypted, but SMM is just… sitting there. You can pull it out of your firmware. It's plain x86, an extremely well understood architecture with amazing reverse engineering tools. You can absolutely audit it, and in many ways it's easier to notice backdoors in binary than it is in source.
Trustzone is mostly deployed on Devicetree-based platforms. What saves you here isn't the choice of firmware interface, it's whether the platform depends on hostile code. If you don't care about secure boot (or if you do but don't care about updating the validation keys at runtime), you can implement a functional UEFI/ACPI platform on x86 with zero SMM.
I appreciate your detailed reply. I think we're looking at this from different perspectives. You are correct in an item-by-item way, but you need to put it all together and see the bigger picture. My comment may have made a mess of the technologies and their capabilities, but I was looking at the forest, not the trees.
There are only two viable firmware alternatives in the world right now: ring 0 U-boot*, or the ring -2 ones: UEFI* and, in the Android world, various bootloaders + TrustZone (read the footnotes!). Manufacturers usually focus on only one of the two: either ring -2 (locked bootloaders, UEFI + ACPI + SMM + whatever crapware they may want to add) protected by secure boot, or ring 0 U-boot + a device tree + their GPL source code. The ones interested in locked-down platforms choose the ring -2 option, and they are not going to make it open source, nor provide the signing keys that would allow it to be replaced by FOSS alternatives.
I appreciate freedom. The Linux kernel is free (ring 0). U-boot and coreboot are free (ring -2 if they include ACPI / SMM, otherwise still ring 0). When I run a Linux kernel, I don't want it preempted and sabotaged by a ring -2 component. If that ring -2 includes proprietary blobs, then it's a hard "no" from me. You may argue that SMM (and ACPI) bring useful features, such as an overheating shutdown when the kernel has frozen/crashed or the system is stuck at the bootloader, but let's face it: practically, there's no free alternative to the manufacturer's blobs when it comes to ring -2. The FOSS community barely keeps u-boot and the device trees working. Barely! An open-source UEFI plus all that complexity for every single board out there is a no-go from the start. If you've ported Coreboot, I'm sure you know how difficult it is.
I recently learned that ACPI can be decompiled to source code, so that's an improvement, but not by much. Unlike a device tree, which is only a hardware description, ACPI is executable code. I see that as a risk, and I'm not the only one - even Linus had something to say about it; the quote is in the Wikipedia article. Some of that code executes in ring -2. It can also install components - spyware components - in the OS; you can read about that in the Wikipedia article too. U-boot has the capability of creating files on some filesystems, and you could argue that a proprietary fork could maliciously install OS components by dropping something in init.d, but I've never heard of it being misused that way, and a manufacturer must publish the GPL source code, so it would be difficult to hide. A device tree can't do that at all. If you use UEFI, then every single blob published by the manufacturer must be decompiled and inspected. U-boot + ACPI is probably simpler than porting Coreboot, but it still won't happen. There are simply too many systems to support.
So, in conclusion: I see ring -2 as a dangerous capability (even if the malware itself doesn't execute in ring -2) because there are no viable open-source alternatives. For this reason I encourage you not to support or promote UEFI and ring -2.
> Trustzone is strictly worse for user freedom than SMM is. This isn't an advantage for Arm.
> Trustzone is mostly deployed on Devicetree-based platform.
True, but the ARM world still has unlocked CPUs that can boot unsigned firmware. There are none left in the x86 world. (Or at least none that I know of.)
> No, the secure boot code that implements cryptographic validation of the OS is typically running in an entirely normal CPU mode.
OK, valid observation - I may have used "ring -2" to describe features that don't typically run in ring -2. I've tried to avoid these technologies as much as possible and don't have much hands-on experience with what runs where.
> you can implement a functional UEFI/ACPI platform on x86 with zero SMM.
One dev could probably implement and maintain one system, or maybe 5-10 if they're related (same CPU, mostly the same hardware). But how many systems are there, and how many devs? It's not possible, barring a very few exceptions where some random dev happens to own one of these systems and does it as a pet project.
----
* When I say U-boot, I mean mainline U-boot plus a device tree, or forks with published GPL source code. I know U-boot can include ACPI and secure boot, but that's not what I mean in the context of this comment. Sure, you can set up secure boot with open-source U-boot if you want to; there's nothing wrong with that.
* When I say UEFI, I mean all related technologies: ACPI, SMM, secure boot, signed firmware, etc. The whole forest.
> can you even have something similar to IBM PC with ARM?
AFAIK, ARM does not have port-mapped I/O, so that makes it difficult to really match the PC. That said, an OS can require system firmware to provide certain things, and you get closer to an IBM-like world. Microsoft requires UEFI for desktop Windows (maybe WP8 and WM10 as well, but I believe those were effectively limited to specific Qualcomm SoCs, whereas desktop Windows is supposed to be theoretically open to anything that meets the requirements).
ACPI for ARM is a thing that exists, but not all ARM systems will have it. Technically, not all x86 systems have it either, but for the past several generations of Intel and AMD, all the hardware you need for ACPI is embedded in the CPU, so only old hardware or really weird firmware would be missing it. Also, PC I/O is so consistent, either by specification or by consensus, that it's easy to detect hardware anyway: PCI config space is at a specific I/O port by specification; CPUID/MSRs let you locate on-chip memory-mapped peripherals that aren't attached via PCI; and PCI has specified ways to detect attached hardware. There are some legacy interfaces that aren't on PCI that you might want, and you need ACPI to find them properly, but you can also just poke them at their well-known addresses and see if they respond. AFAIK, you don't get that on other systems... many peripherals are memory-mapped directly rather than attached via PCI, the PCI controller/root is not at a well-known address, and so on - every system is a little different because there's no obvious system to emulate.
Mostly ACPI is about having hardware-description tables in a convenient place for the OS to find them. The standardized understanding of power states, and the OS-independent description of how to enter them, is certainly important too.
There are/were other proposals, but if you want something like UEFI and ACPI, and you have clout, you can just require it for the systems you support. The market problem is that Apple doesn't let its OS run on anything non-Apple, and Android has minimal standards in this area. Whereas the marketplace of software for the IBM PC relied heavily on the IBM BIOS, the marketplace of software for Android relies on features of the OS; SoC makers can build a custom kernel with the hardware description hardcoded, and there's no need to provide an in-firmware system of hardware description. Other OSes lose out because they, too, need custom builds for each SoC.
> my worry is that ARM devices will bring an end to such an open platform like modern PCs are.
Modern PCs are NOT an open platform anymore - not since signed bootloaders, UEFI and secure boot. ARM devices, on the other hand, as long as they don't require signed bootloaders (like phones) or a closed-source driver for the GPU or something, are in fact open.
You can still boot Linux on PCs, though. With ARM devices you're SOL in most cases. Device tree is a total shit show: for a random ARM device, you'd better hope randomInternetHero42 on some forum has one for your device. Just asking the device itself what hardware exists is a non-starter in the ARM world.
I don't know what you're talking about. If the device boots, you'll find the device tree in /sys/firmware/fdt, or in unpacked, human-readable form under /sys/firmware/devicetree/*.
And you're stuck with whatever fucked up kernel the vendor gave you, assuming they even followed their obligations and gave you access to the source. The vast majority of x86 systems run mainline kernels because there's a sufficient level of abstraction. The number of Arm devices that's true for is a tiny percentage of the Arm devices out there running Linux.
RISC-V has a beautiful license, but it is one of the ugliest and least efficient computer ISAs ever designed.
Any competent computer engineer can design a much better ISA than RISC-V.
The problem is that designing a CPU ISA is easy - it can be done in a few weeks at most. On the other hand, writing all the software tools you need to actually use an ISA - assemblers, linkers, debuggers, profilers, compilers for various programming languages, etc. - requires a huge amount of work, many man-years of it.
The reason everybody who uses neither x86 nor Arm tends to use RISC-V is to reuse the existing software toolchains, not because the RISC-V ISA is any good. The advantage of being able to use already-existing toolchains is so great that it ensures the use of RISC-V regardless of how bad it is compared with something like AArch64.
The Intel ISA, especially in its earlier versions, has also been one of the ugliest ISAs, even if it seems polished compared to RISC-V. It would be sad if, after so many decades during which the Intel/AMD ISA displaced other, better ISAs, it were eventually replaced by something even worse.
As one of the main examples of why RISC-V sucks, I think that any ISA designer who believes that omitting from the ISA the means for detecting integer overflow is a good idea deserves the death penalty, unless the ISA is clearly declared as being a toy ISA, unsuitable for practical applications.
> Any competent computer engineer can design a much better ISA than RISC-V.
Hello, my fellow bitter old man! I have to respectfully disagree, though. Firstly, RISC-V was actually designed by competent academic designers with four preceding RISC projects under their belts. A tenet of the RISC philosophy is that the ISA is designed by careful measurement and simulation: decisions are not supposed to be based on gut feeling or familiarity, but on optimizing the choices - which they arguably did.
Specifically, about detecting overflow: the familiar, classic approach of a hardware overflow (V) flag is well known to be suboptimal because of its effect on speculative and out-of-order implementations. RISC-V has enough primitives to handle explicit overflow checking, and they are consistent with performance techniques such as branch prediction and macro-op fusion, to the point of having asymptotically vanishing cost: there need be no performance penalty. Even better, RISC-V code that does NOT care about overflow can completely ignore these checks.
> I think that any ISA designer who believes that omitting from the ISA the means for detecting integer overflow is a good idea deserves the death penalty
Given that the C standard (C99 §3.4.3/1) declares integer overflow to be UB, which means the compiler can and often will do anything it damn well pleases with your code, I can understand why the RISC-V designers, under the influence of the stupidity of the C standard, could leave out overflow detection. I'm not saying it's a good idea - in fact it's complete and utter braindamage - but I can see where they got it from.
>but it's a lot more instructions so it won't be used in practice.
It will be used where overflow actually needs to be handled, i.e. where, on another ISA, an exception would handle it. Which is seldom the case.
More instructions doesn't mean slower, either. Superscalar machines have a hard time keeping themselves busy, and this is an easily parallelizable task.
>The designers of RISC-V included the bare minimum needed to compile C, everything else was deemed irrelevant.
Refer to "Computer Architecture: A Quantitative Approach" by by John L. Hennessy and David A. Patterson, for the actual methodology followed.
What does that mean in a world where writing software just got a few orders of magnitude cheaper? An Andrew Huang could create a new ISA replete with everything and get it done.
It didn't though.
Not good software, at least.
AI (which is what I'm guessing you're referring to here) is simply incapable of writing such mission-critical low-level code, especially for a niche and/or brand-new ISA. It simply can't. It has nothing to plagiarize from, unlike the billions of lines of JavaScript and Python it has access to.
This kind of work can most definitely be AI-assisted, but my estimate is that the time gained would be minimal.
An LLM is able to write some functional Arduino code, maybe even some semi-functional bare-metal ESP32 code, but nothing deeper than that.
RISC-V is the ARM mess taken to extremes. From TFA:
> It’s a crapshoot. That’s why whenever anyone recommends a certain cool Arm motherboard or mini PC, the first thing you have to figure out is what its software support situation is like. Does the OEM provide blessed Linux images? If so, do they offer more than an outdated Ubuntu build? Have they made any update promises?
Almost every ARM board I've got is running ancient kernel images that were out of date even when they were released and haven't gotten any newer since. But that's positively great compared to the RISC-V situation, where you feel like you're taking your life into your hands every time you try to update. The last update I did, to a popular, widely used board, took close to a full day: progressively reflashing different levels of boot loaders and kernel images, repartitioning the MTD for each reflash, hacking around hardware and boot params via the serial interface through trial and error, and slowly working my way up to a current, already-out-of-date firmware and kernel config.
I really hate to like x86, but I know that when I set up an embedded x86 device it's flash, apt-get update/upgrade, and I've got the latest stuff running and supported.
Frankly, I don't like these kinds of takes. Yes, people are seeing more spam in their pull requests, but that's just what it is - spam that you need to learn how to filter. For regular engineers who can use AI, it's a blessing.
I'm a long-time Linux user - now I have more time to debug issues, file them, and even open pull requests that I would have considered too time-consuming in the past. I want to, and now can, spend more time debugging the Firefox issues I see, instead of just dropping them.
I'm still learning to use AI well, and I don't want to submit unverified slop - it's my responsibility to provide a good PR. I'm building my own projects to get the hang of my setup, and very soon I can start contributing to existing projects. Maintainers, on the other hand, need to figure out how to pick good contributors at scale.
Well that’s the problem. AI is really good at making things that bypass people’s heuristics for spam.
Someone can spam me with more AI slop than I can vet, and it can pass any automated filter I can set up.
The solution is probably closed contributions, because figuring out good contributors at scale sounds like figuring out how to hire at scale, which we as an industry are horrible at.
I've submitted it to the web store, but I'm sure the review will take a long time. The extension requires a lot of permissions; with this kind of thing I personally trust it more if I can build from source.