Hacker News | rvrb's comments

matklad did it justice in his post here, in my opinion

https://matklad.github.io/2025/08/09/zigs-lovely-syntax.html



I know they’re different workflows entirely, but you could start dabbling with Bitwig, which is really good and runs on both macOS and Linux, then eventually switch when you feel like you’re out of the workflow hole

But to be honest, I’m still using Bitwig on a Mac for my studio despite having switched everywhere else to Linux


Good point, the database written in Zig with a bug mascot tells us nothing about writing a database in Zig without a bug mascot


im not sure i trust any hot takes from a person that doesn't know anything about anime


As someone quite happy with a vanilla Fedora Silverblue install on both my desktop and laptop, can anyone explain why I would rebase to Bluefin instead? It seems like there must be technical merit to the Universal Blue spins beyond adding preinstalled software/configs, but I can't find it, despite looking.


Haven't used Silverblue or Bluefin but the way I've seen it explained is that Bluefin, Aurora, etc include a number of QoL and practicality adjustments that make them nicer/easier to live with as a desktop OS than baseline Silverblue is.


Yes! I used to use Silverblue too. Things like Tailscale, Docker, Davinci Resolve, nvidia drivers, and all codecs are a button-click away or already properly set up for you out of the box on the universal blue projects.


Co-maintainer here. I dogfooded Silverblue for about 2 years before deciding to do this. Initially Bluefin was just a "fix me script" that did the usual bits. When bootc came around, it let me put that script in GitHub CI and then just consume the fixes I want. A few of us started to do this, and then, since a bunch of us were Kubernetes nerds, we defaulted into "let's make this together."

Here are some of the changes:

- We add all the codecs, and drivers in the build step so the user never has to care.

- We turn on automatic updates by default; these are silent

- We remove Fedora's broken flatpak remote and go full Flathub out of the box

- We handle major version updates for you in CI; there's no "distro release day" update, it's just a normal update that day

- Since we use bootc it's easy for people to FROM any of our images and make a custom build, and we ship a template for anyone to do so: https://github.com/ublue-os/image-template

- You can turn on "developer mode" which gives you vscode with devcontainers, docker, incus, etc in addition to podman.

- We integrate homebrew out of the box for package management for the CLI, flathub handles the GUI packages - we don't want to be a distro, in this world the base image is a base image and my relationship is with brew and flathub. I don't need or want to have a relationship with my OS.

- We gate kernel versions to avoid regressions, so we can avoid certain releases or "ride it out" until fixes are published.

- We ship [Bazaar](https://github.com/kolunmi/bazaar) - which is a flatpak only store designed for performance. Since the OS is a different layer we can throw away all those packagekit jankfests and start from scratch.

As for the desktop, I worked on Ubuntu for about a decade and wasn't happy with the direction Ubuntu was going at the time. Fedora had rpm-ostree/bootc but didn't know what to do with it so they were just sitting on the tech. So I just combined them, the desktop has an Ubuntu-like layout and vibe.

The clear benefit is that you have one image for everything. Local layering in Silverblue doesn't really make sense to me anymore; if you want to handle a bunch of local packages, just use a traditional distro, because doing that in Silverblue breaks just as often as it does in package distros. Pure image mode is the strongest benefit. It's 2025; I refuse to do "post-installation crap" that should be automated, and bootc lets me automate it!

More info here since I'm leaving out a bunch of stuff: https://docs.projectbluefin.io/introduction
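To make the "FROM any of our images" point above concrete, a custom build is just an ordinary container build. A minimal sketch of a Containerfile, assuming a Universal Blue base (the tag and the layered package here are illustrative; the linked image-template repo is the real starting point):

```dockerfile
# Containerfile: derive a custom OS image from a Universal Blue build
FROM ghcr.io/ublue-os/bluefin:stable

# Bake extra packages in at build time so the machine itself
# never needs any post-install steps
RUN dnf install -y htop \
    && ostree container commit
```

CI builds and pushes this to any OCI registry, and a machine can then rebase onto the resulting image with bootc.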


Sounded interesting until you got to homebrew. Have you deactivated its telemetry? Also it talks to github often as well?

Like the dino theme.


Hmm so you don't use rpm-ostree? Or ostree at all? Sorry I'm just having trouble finding some of the technical implementation details, seeing a lot of details on the UX though.


ostree is the library that rpm-ostree and bootc share. However bootc is moving over to composefs as a backend. This effectively makes it distro agnostic and there are communities forming: https://github.com/bootcrew

Fedora still uses rpm-ostree, when you do an update it's pulling from an ostree remote served from a server. bootc replaces that with just an OCI registry. We ship the `rpm-ostree` binary on the systems still. It's still used for things like adding kernel boot arguments.

Here's their diagram: https://bootc-dev.github.io/bootc/filesystem-storage.html

Generally speaking new users can skip the rpm-ostree parts and just start with bootc. I am not an expert in this, there's a rust library in there somewhere. Hopefully someone can help fill in the blanks.
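As a rough sketch of what the bootc workflow above looks like day to day (the subcommands are the standard bootc/rpm-ostree CLIs, but the image URL is a made-up example):

```shell
# Show which OCI image the system is booted from and what's staged
sudo bootc status

# Pull and stage the latest build of the current image (applied on reboot)
sudo bootc upgrade

# Rebase to a different image entirely, e.g. a custom build
sudo bootc switch ghcr.io/yourname/custom-bluefin:latest

# rpm-ostree is still shipped for things like kernel boot arguments
sudo rpm-ostree kargs --append=mitigations=off
```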


bootc currently relies on OSTree; you use OSTree-enabled base images. But the work is well on its way to transition to composefs, which uses kernel-native tech to replicate the features needed from OSTree, so any systemd-enabled distro can become bootc-able. For a bunch of examples, see: https://github.com/bootcrew


I can answer for me coming from the same boat... exactly zero risk to try, given how easy it would be to rebase back onto silverblue. I didn't have to worry about the codecs Fedora couldn't legally ship so I could remove an overlay, plus I figured bootc was the future and I wanted to see it working.


Sure, there is zero immediate risk; I just genuinely don't know what I would get out of taking on the risk of adding a community-maintained middleman into the supply chain. I know that, because of rpm-ostree, it's not the same as some random distribution. If there's nothing to get out of it personally that layering a package or two can't give me (or better, writing my own simple image).. why?

I’m not saying there isn’t a reason; I’m genuinely looking for it


Fedora Silverblue is the closest feeling to the macOS experience I fell in love with that I’ve had on Linux in, well, ever. Very happy with it on my desktop and laptop. It’s not perfect but it is less imperfect than modern macOS has become.

Finding a laptop that works well is annoying, however.


> Finding a laptop that works well is annoying, however.

It doesn't exist at the moment. :\

I would pay 2x the price of a macbook for a linux laptop with the same hardware quality.

The battery life and power/efficiency of my m4 pro is insane. It's so good that it's really hard to justify using anything else right now.


It's sad that the best Linux laptop right now arguably is a M4 Mac virtualizing Linux.


Why not run it natively with Asahi Linux?


Well limiting to specifically OP's example (M4 Mac), Asahi doesn't support it yet. :(


Asahi Linux doesn't support the M3 or M4. That said, I'd be curious why OP doesn't consider Asahi on M2 to be a good option. AFAIK the only thing missing at this point is Thunderbolt and USB-C display output (HDMI out works fine).


There are a few drawbacks. dnf for ARM Linux doesn't support Tor Browser yet!! Power saving was quite bad when I tried a few months back; in sleep mode it drains more battery than macOS sleep mode does.

There are a few other compile/transpile bugs here and there... but I'm rooting for it!!! Hopefully they can get sorted out soon.


Is Asahi installed side by side on a mac? You pick it at boot? And how “install and just use” it is?


> Is Asahi installed side by side on a mac?

Yes, the installer automatically (and reliably) resizes partitions for you. A minimum of about 70 GB for macOS is needed (anything lower is still possible but unsupported).

> You pick it at boot?

There's a default choice that will boot.

> And how “install and just use” it is?

Probably one of the smoothest Linux installs I've had in 10 years or so, since you just run the installer from macOS instead of flashing ISO files to a USB drive.


Just, FTR, you can also tell the Mac OS to reboot to Asahi, and you can tell Asahi (via CLI) to reboot to Mac OS

https://www.reddit.com/r/AsahiLinux/comments/1c2yasr/can_i_b...

When I was first getting my feet wet with Asahi I was using these methods a lot.


Thank you. This is helpful.

I learned that my option for a well-tested and functioning distro is pretty much Fedora Remix. So I guess I won't be able to use Elementary OS, sadly. I hope Fedora is, or can be made to be, like Elementary in look and behavior.


Thank you. I will try it.

Just one more question: is my mac hardware (and encrypted data) still protected the way it is protected before installing Asahi on it? Like device/theft protection etc.

My limited exploration/search on this topic kinda says in some way Asahi Linux lives inside the bounds of macOS (even all the data is available/readable in macOS which is fine by me). Is that so?


> Just one more question: is my mac hardware (and encrypted data) still protected the way it is protected before installing Asahi on it? Like device/theft protection etc.

Apple devices probably have the strongest security model offered by any otherwise open consumer device these days, so yeah: Installing Asahi won't degrade security of the macOS installation.

Note that the Linux install itself will have weaker protection, since e.g. the fingerprint sensor is not yet supported. I also think disk encryption would be much harder to set up due to Apple's boot process (if it's even feasible at all currently).


IIRC, there are a bunch of random things that still don't work -- no USB-C output, webcam, audio -- and if I have to guess, suspend/resume is probably not rock solid either. The only benefit is that you get to use Linux, but then you may lose out on actually getting work done without worrying about these issues. The new UI is inferior, but you can still get things done.


This information is very dated. Webcam/audio work fine nowadays, and suspend/resume have never had issues that I recall. IME the feature support page is very accurate (no hidden gotchas like "technically it works but it breaks after sleep").

USB-C output is indeed not working but actively making progress (so actively that some of the related patches have been sent to the kernel mailing list and merged this very week).


I purchased a $3400 Linux laptop with excellent hardware and I've experienced the following issues:

Sound output is garbage. Webcam barely works or straight up doesn't work in some apps. The built-in microphone doesn't work with common apps like Google Hangouts and Zoom. All on Ubuntu (latest). Certain input ports (like USB-C) don't work for certain apps, even though they work fine for other Linux users and they work on my other computers.

Oh, and when I was on PopOS, the entire system froze and crashed nonstop (sometimes I would go over a day without a freeze, sometimes it would happen within 30 seconds of booting). This stopped happening for a while after I did a complete system reset, but then it started happening again. The team was completely unable to figure out the issue despite it being fairly widespread. No hardware was damaged or corrupted as they claimed must be the case.

Basically, in my experience, Linux has a ton of issues still.


Sorry to hear the laptop you chose has poor Linux support. I wouldn't buy a new system these days without first checking how well it will run Linux, even though in an ideal world that wouldn't be necessary.

Do note that the context of my comment was Asahi Linux though.


Wouldn’t call it “very” dated. Depending on your model, it seems like they were ready to cross this off this March.

https://liliputing.com/asahi-linux-adds-microphone-support-f...


That’s exciting! USBC was definitely a dealbreaker for me but it’s great to see it’s no longer the case.


Webcam and audio both work now. I can't speak to how solid suspend/resume is because I haven't actually used it--I just follow the project--but I wouldn't necessarily assume it's flaky.


Asahi only supports the M1, and partly the M2, I believe. The M3 was enough of a change that there are no drivers for it.


I run asahi on my M2 mini as the primary OS.

There's a bit of a pain in that I could only get Brave to run Netflix, but all that meant was that I switched to using Brave for all of my browsing.

There's no official Tor build for it, but there is an unofficial one (that I do not use)

The only real pain I have with it is that Facebook's javascript for its reels chews up RAM something horrible, which freezes the OS whilst being processed, and often causes me to reboot it


Framework?


I wanted to post this myself because I swear by my Framework 13 and it's my workhorse. However, it doesn't hold a candle to my wife's M3 Pro on a number of metrics mentioned here such as: Battery life, screen quality, and overall performance.

The Framework (Intel 12th Gen) also has the added benefit of heating the house, particularly with graphics "heavy" workloads (lots of windows open in GNOME Mutter, VMs, etc).


Framework is nice but it's far from Apple's laptop hardware quality. The biggest draw of the Framework is its modularity.


Based on my Framework 13 and MacBook M1, I think the only downgrades are the speakers and the trackpad. The keyboard is actually an upgrade; the 2.8k screen has a better size ratio, but the contrast is not as good. I'd say it's decent. The trackpad performs well, but it's the old hinged design and not haptic. Being able to service my own laptop, replace parts, and max out the storage for less money than a mid-spec MacBook is just unbelievable.


I wonder if providing a unibody aluminium "premium" option might help Framework capture the "build-quality" crowd. That combined with the improved keyboard may turn out to be a compelling offering. I think the main remaining challenge will be the display and trackpad.


this is a psychotic question but have you actually tried doing that? like using a macbook as a vessel for running linux under parallels as a primary use?


I went down this rabbit hole but with UTM, didn't get far though. Anything GPU-accelerated will struggle, or straight up not work. That includes GPU-accelerated terminals and code editors. You will also have conflicts with touchpad gestures, hot corners, and keybindings. It's not a good way to go imo.


I guess if you autostart the Linux VM upon booting, this should work. I am actually doing the same with BeOS, but using Linux as the hardware compatibility layer. The Linux distro is configured to autologin to sway, which starts a VM and runs it fullscreen. The guest VM is configured to use all the laptop's RAM, leaving only 1 GB for the host. On the second virtual desktop, the PulseAudio volume control and the Wi-Fi and Bluetooth management tools are opened automatically, so I can easily plug in a BT headphone or switch networks.

The Linux distro automatically shuts down if I shut down the VM. I am using swaylock to lock the screen when I am away.
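The setup described above can be sketched as a sway config fragment (the VM command, paths, memory split, and tool choices here are illustrative assumptions, not the commenter's actual config):

```shell
# ~/.config/sway/config (fragment)

# Workspace 1: the guest VM, fullscreen, with most of the RAM (host keeps ~1 GB)
exec qemu-system-x86_64 -enable-kvm -m 15G -display gtk,full-screen=on -drive file=/home/user/beos.qcow2,format=qcow2

# Workspace 2: audio, Wi-Fi and Bluetooth management tools, opened automatically
exec swaymsg "workspace 2; exec pavucontrol; exec nm-connection-editor; exec blueman-manager"

# Lock the screen when away
exec swayidle -w timeout 300 'swaylock -f'
```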


I haven't done Linux (I have servers and such and "got used" to macOS enough for my needs) but I did in ages past do something very similar with Windows on Parallels on Intel Macs.


I have, and found the linux software ecosystem for arm lagged that of amd64.


> The battery life and power/efficiency of my m4 pro is insane.

They're coming. Look for AMD Strix Halo chips. They're in the comparably comfortable efficiency range.


> AMD Strix Halo chips

Do you happen to know any laptop that has a) equivalent screen quality (Retina resolution), b) keyboard, and c) trackpad, but with full Linux support where all hardware peripherals just work?


The ThinkPad X1 series usually have great linux support and you can option them with 2.8k@120Hz OLED panels, which at 14" lands between the Air and the 14" Pro in terms of PPI. I have a couple generations old X1 Yoga and all of the hardware worked out of the box with Manjaro and Debian, including the touchscreen and active stylus.

People usually buy them for the keyboards and trackpoint, but imo the touchpad is still pretty solid. It is a bit small on account of the trackpoint buttons taking up vertical real estate, but it's pretty responsive, and multi-touch gestures work perfectly in my experience. I believe newer ones have larger trackpads than mine, though still not as large as a similarly sized Mac's.


I have a Gen 12 X1 and I'm very happy with it; huge step up over my previous Dell XPS, and all the hardware works great on the latest kernel.


Reminder that ThinkPad's maker, Lenovo, has shipped laptops preloaded with the Superfish malware (https://easytechsolver.com/what-is-the-lenovo-controversy/)


This is true. However, Superfish hasn’t been relevant in years and Lenovo walked back on including such malware going forward as far as I can tell.

And furthermore, Superfish didn’t affect ThinkPads. Only lower end Lenovo models.


And surely didn't affect Linux installed on it, which is the topic of the thread.


Well, the highest-resolution MacBook has less than 4K resolution, and there are plenty of 4K laptops out there...

Most "business" centric laptops work great with Linux, as long as you use a well supported distro (Fedora, Ubuntu, Debian, openSuse). YMMV if you use other distros...


I think it’s debatable whether full 4K makes any sense on a 14” or 15” screen.


Regardless, the parent asked and it's a thing...


HP ZBook Ultra G1a? It has Strix Halo, 14" 2880x1800 (242 ppi) 120 Hz VRR OLED, and Ubuntu 24.04 options.

Can't speak for the keyboard, but HP ZBooks/EliteBooks tend to be decent.


I'm typing this post on the 395+ 128gb RAM model. IMO, the keyboard is better than the one in the newest Macbook Pro. Just enough travel, and quiet enough so I don't disturb co-workers when I type.

I use it for development running Fedora Workstation. My job involves spinning up lots of containers and K8S KIND clusters.

I often reach for it instead of my 14" M4 Macbook. However, I will choose the Macbook Pro when I know I'll be away from a charger for a while. The HP, as great as it is, still has bad battery life.


The only downside is that the webcam _does not work_ unless you use Ubuntu 20.04 w/ the OEM kernel package.

The ISP driver which will enable the camera to work is in the process of being upstreamed, though. I believe they're targeting early 2025 for mainline Linux support.


> early 2025

Is that a typo?

There’s barely 4 months left in 2025.


Oops, yeah. I meant early 2026


Do you feel a difference between Strix Halo and other x86 machines you could lay your hands on to date? I want one, but with an M2 Max macbook pro and Zen2 desktop it feels very hard to justify.


Subjectively, it’s a bit faster than my M2 Max, but a lot faster than my 3700x desktop.


I have this laptop with this display configuration and it looks amazing. However on Arch with Gnome/Wayland I cannot get color management to work, which is a problem since this display has such a wide gamut. Opening HN on it for the first time I was blinded by the deepest orange nav bar I could imagine.


The HP ZBook Ultra G1a is as close as you can get with Strix Halo. There are two screen options, and the OLED one is high resolution. It's Ubuntu certified as well and can run LLMs nicely. The keyboard, trackpad, etc. are all top notch. It's somewhere in between a MacBook Pro and Max.

I have one and love it but it's not close to my wife's mac on battery life.


I've yet to understand the point of OLED if it sits at 400 nits. All Apple's devices from the iPhone to the Studio Display are brighter; some of them are much, much brighter even with OLED :/


Contrast and pixel response time. OLED PC monitors still look amazing even with low all-screen brightness.


Your best option is framework IMO.

The 2.8k panels are overall inferior to Apple's across a number of metrics, but they have a higher pixel density than the Air 13 (and the S-tier aspect ratio of 3:2).

The FW13 keyboard is objectively pretty decent but not perfect, and is much, much better than any keyboard Apple has made in the last decade; could be personal preference, but Apple has been making some pretty bad keyboards for a while now.

The trackpad on the FW13 is OK. No one even comes close to Apple, but it's pretty decent; nothing upsetting if you're comparing it to any non-Apple trackpads.

Framework has excellent Linux support; all the hardware bells and whistles generally work out of the box on every Linux distro. Fedora, Ubuntu, and Bazzite are officially supported by Framework: they QA against all three and work with maintainers to resolve issues, so you can be totally confident that everything will just work. (At least as well as it would on Windows!)

The other two downsides relative to a MacBook are build quality and support. The FW13 is pretty solid in practice: I have dropped mine dozens of times, throw it in my bag, and treat it roughly overall, and it has taken on some dings and scratches but everything still works. But the frame is not very rigid, it flexes in lots of places, and it just does not feel as nice and solid as a MacBook. And support can be hit-or-miss, like with any small manufacturer.


I think you’re talking about Apple’s butterfly keyboards which were only around for 3-4 years of the last decade you’re talking about. Apple’s keyboards have been great for 5+ years now.


Agreed. Only issue is that they wear down really fast. Your fingers sand them down at a mindblowing pace, and soon enough all of them are smooth, with most used keys having shiny blemishes on them


Yeah, the last few machines that I've owned have been from Apple and I've noticed that too.

It's a kind of confusing choice on their part to use cheaper plastic that does this so fast.


you’re smoothing your fingers wrong


I assumed that was grease.


... do you moisturise your hands? Because "sandpaper" shouldn't describe your fingertips


> retina resolution

That just means 3024x1964. With other laptops you can either go up a step to 4k or down to OLED 2880xsomething.


Unfortunately it also means a software stack that can properly scale everything for such a display. Windows and Linux both have... issues around UI scaling that make this kind of a pain.


I had a 1440p 14" Lenovo Thinkbook that ran Linux fine. Ryzen 5800U, maybe $400 secondhand IIRC.


The Razer Blade is my Windows laptop. The hardware is great, MacBook-nice, but it needs the chip efficiency.


The performance seems to rival Apple's Pro / Max chips but the battery life can only do that for light workloads or videos.


I've asked this question very recently: https://news.ycombinator.com/item?id=44319903

Spoiler Alert: There really isn't anything that comes close to the macbook (even at 2x price).


Same. Just give me my M4 Macbook Pro, but Linux compatible.

I'm sure people will chime in and say framework, or other Linux-first vendors but they still make too many compromises.

Speakers suck, or the display sucks, or the microphones suck, or they get too hot, or they are too loud, or battery life sucks in comparison, or the chassis feels like cheap plastic and cracks and breaks easily.

There is no other laptop on the market that beats the Apple silicon macbooks right now.

I continue to tolerate macOS just for this hardware, and the rest of the OEMs seem to have zero intention at all to trying to catch up.


I'm hoping maybe the Qualcomm laptops make some progress on battery life. I had an LG gram that had honestly surprisingly good battery life on Linux, and maybe the ThinkPads are good too.


Well the Qualcomm SnapDragon chips literally compete on operations-per-watt. But it depends on what you need -- raw horsepower with a mostly tethered laptop or on-the-go freedom.


I found that I’m missing the netbook era. I need some 11-inch laptop when I’m on the go for email, writing, and coding. For more focused tasks, I’m not giving up my 2-monitor setup and my mechanical keyboard, so the computer form factor matters less.


That's ridiculous. Thinkpads, Zephyrus G14, Framework, they all have performance, build quality, screens, battery, etc, comparable to a Mac.


Do they? So far I haven’t found anything that matches battery life, build quality, or trackpad quality.


The G14 definitely matches in build or exceeds in build quality, keyboard, trackpad, speakers, and display. Battery life is shorter though. But it has a better GPU and supports Linux, which is way more important to me than an hour or two extra battery.

The Framework is also excellent, but with different compromises: that sweet display aspect ratio for instance, but no OLED.


also don't forget how quiet this thing is.


Try Star Labs; best build quality I've ever seen after Apple.


What models have you used, specifically?


I have only owned the StarBook MkVI (one gen previous to current) since it came out, 3 years or so ago. Exhaustive research happily led me to discover it is the most Linux-compatible laptop; even firmware updates can be performed natively from LVFS/fwupdmgr. I knew it was the one when I learned that, in addition, they are basically the only Linux laptop company that also produces their own hardware.

The screen I wish was brighter, and while I don't care, it doesn't have as many pixels as Retina. The fan is not bad, but louder than a MacBook's. Battery I have not tested well; it is far from the battery life of a MacBook. But the coreboot firmware allows me to set the charging speed (slower is better for the battery) and the max charge level (which I keep at 60% when plugged in). The trackpad is great though, and the keyboard is fantastic.

And all the parts are replaceable, as much as the Framework laptops. I don't know why they are not more popular.


>I would pay 2x the price of a macbook for a linux laptop with the same hardware quality.

Same, and I've been wanting this for 15 years now ...


Before their arm64 CPUs you could get a thinkpad or an xps and not have really bad FOMO. But now... it's just not even close :\


It's messed up TBH, the only laptops competitive on battery are Qualcomm which comes with a different set of sacrifices instead!


> I would pay 2x the price of a macbook for a linux laptop with the same hardware quality.

How about half the price?

Huawei is probably banned in the USA these days; however, the hardware quality is top notch and everything Linux works just fine out of the box. Not everything is perfect, though; it all depends on what you want to do. If you are okay with integrated graphics (so no Blender or other 3D applications) but do need genuine Intel floating-point single-thread performance, then give Huawei a go.

I have had plenty of Dell XPS, Lenovo things and much else over the years and all of them have poor thermal management and tend to creak if you use less than four hands to pick them up. The Huawei machines are in a different league.

As for battery life, I think you are right, but I am insanely loyal to genuine Intel and that means plugging in. I don't have problems with that.

People do get triggered by Huawei though, because the dreaded communists will steal your soul and brainwash you into hating the American way of life. So you might want to just cover up the badging lest anyone be offended. Ironically, a Huawei Matebook X Pro running linux is the laptop that is least likely to spy on you because the camera folds down into the keyboard.


I have a couple that work quite well with it, including a very nice 10” one - https://taoofmac.com/space/reviews/2025/05/15/2230

And I run a macOS-like GNOME theme that is pretty great.


This looks great, but not for the US market!

https://store.chuwi.com/products/corebook-x-i3-1220p?#descs


Based on past experience, I wouldn't buy chuwi hardware unless you're willing to treat it as disposable


Good to know... at that price it almost is. I just want a half way decent Linux laptop that isn't FHD or 5 years old. Carbons are more than I want to pay for something 'for fun'.

That's less expensive than the ASRock NUC BOX-225H I bought... and that was without RAM/NVMe.


Silverblue is very underrated currently. I see it as a slightly more pragmatic immutable OS.

NixOS I keep wanting to throw in the bin randomly, but I have to admit that when it all works, it's kinda beautiful to own: you can harness a lot of power for comparatively little spent in mental tax.


I’ve tried Silverblue, but it’s far from the Mac experience; on my PC it feels sluggish and bloated. Perhaps I’m too simple, but I only need a vanilla Linux, just like the now-dead Intel Clear Linux, with Linuxbrew and Flatpak.


If you haven’t tried openSUSE Aeon, I’ve been really impressed with it.

BTRFS I find is a more elegant solution than OSTree for this use case, and it’s got a very minimal and polished happy path.

Silverblue covers way more use cases (not least multiple users), but Aeon's setup, and its secure boot and encryption setup, is very slick and macOS-esque philosophically.


I was using Pop!_OS and really loved it. Feature wise it would be an excellent replacement and I love the idea of running Linux.

However, one day when I tried to update the Nvidia driver it failed and when I tried to revert back I got a bunch of errors. My computer is foremost a tool to me and I don't particularly enjoy nor have time for stuff like fixing drivers.

Despite Apple's flaws, it gives me something that just works every day.


Nvidia has always been a PITA on Linux, whether you're using the open source or proprietary drivers. Decent drivers, documentation, and support for their Linux community have always been somewhere between actively hostile and barely an afterthought.

Go to any Linux distro subreddit right now and browse for people experiencing stability issues, random hanging, or no video on boot. Sometimes they don't mention it upfront but it almost always turns out they have an Nvidia card.

AMD and Intel GPUs have much better native open source support and (usually!) work out of the box without any effort.


Sometimes I wonder how the Linux distro landscape would have looked if there hadn't been a new distro for every new use case, design choice, or disagreement among lead devs. Could we have better allocated the resources to battling Wi-Fi not working, or USB creaking on the turns? Or would those have stayed the way they are, because such fixes mostly come, or should come, from the OEMs/vendors? Would it be better for them if the onus was not to make things work on hundreds (or is it thousands?) of flavours?


That was my hope between Slackware 2.0 and around 2010. Eventually I got Windows 7 and a laptop that supported virtualization, installed VMware Workstation, and that was it; my zealot years were gone.

Still got an ASUS netbook though, a market killed by tablets and Chromebooks.


Silverblue is great, but regular Fedora is worth a look too if you don't want to deal with the teething issues of managing all your dev tools with Silverblue's immutable setup. Granted, that was 2 years ago when I tried, so things might be better now.

Infuriatingly, I have a MacBook because a couple years ago I wanted a laptop that just worked while keeping my familiar tools, but it really feels like Linux is trending up in polish and macOS is on the way down, with an intersect possibly happening in a couple years.


That Apple would allow this development to happen without any reversal is astounding. If allowed to continue it could seriously damage their MacBook market share.

Then again, they may not care that much as long as they have the iPhone customer base.


Apple was once all about creating, lately it's all about consuming.

I expect the MacBook to be replaced by the iPad any second now.


In Bluefin (Silverblue-based) they have brew preinstalled, which helps a lot. Plus now it's more Mac-like.


Are you using Fedora on the Mac (via Asahi)?

Or are you using Fedora on an Intel/AMD laptop?


If it supported the M4 I would be using it on my MacBook, but I am using a ThinkPad P14s Gen 6 (AMD) right now. Some issues with suspend that I worked around with a kernel parameter, but other than that everything worked out of the box.


Thanks, I wasn't sure from your initial post


My ThinkPad P1 Gen 7 works absolutely fine. I get about 10 hours of battery life out of it. You can get it with Fedora preinstalled.

In my experience, ThinkPads generally work fine.


Calling gnome's UI better than macOS, even with Tahoe, is wild.


Gnome still does things way better than macOS: multiple desktops with no animations, built-in hotkeys for applications without a third-party tool, Gnome extensions, and a search that actually works for finding things (I know this is hard for Mac users to understand).


It was the straw that broke the camel’s back for me. After trying out the preview for a month, the writing was on the wall, and I began the process of switching to a Thinkpad with Linux. I am now fully off macOS for the first time in 20 years of being an Apple die hard. I could use a lot of emotionally loaded words to describe how I feel about this release, but the long and short of it is that I am no longer the target audience for Apple.


Similar story here. Long-time Apple fan, but as they say, "trust arrives walking but leaves on a horse". I'm real mad!

I installed Tahoe in a VirtualBuddy VM to see how it was before running it on my main system, and... I will definitely be keeping Sequoia for a while (at least a year, probably).

If the situation does not improve in the meantime, I will probably switch to a Framework laptop running COSMIC desktop or something like that.


Yep I feel the same. To know if you're the target user or not I guess you go to Apple's marketing material.

Do you see anyone that looks like you, doing anything that looks like what you do? I don't and I can't remember the last time I did.

It seems to be a lifestyle brand now for people I have little in common with.

At least with Linux there's the possibility that you can make it your own even if it's not that way right now.


> Do you see anyone that looks like you, doing anything that looks like what you do? I don't and I can't remember the last time I did.

You mean you are not an ethnically interesting person with a head full of colored hair wearing ethically sourced pre-aged linen sipping responsibly grown coffee with a serene smile facetiming even more interesting people without any worry about any real problems in the world?!


In the same boat. After like 15 years I had enough. I started de-Appleing my life in 2024. I still run an M1 Pro Mac from work, which is great. Two days ago I finally ordered all the parts for a high-spec Linux PC. Not for gaming or anything, just for compute. I'm soooo looking forward to the freedom this will bring. The stuff I already run on Linux, the distros are all great. I love Gnome for how it looks and KDE for how seamlessly it works. The new PC will let me tinker and try and hop and swap like I could never dream of for so many years.


Curious what kind of specs you've decided on, would you share? My mind always jumps to EPYC or Threadripper for this kind of use-case


Similar story here, but going from Windows to Linux. It seems like Linux is gaining some market share with the OS disasters from both Apple and Microsoft.


I have nowhere to go. I want to move away from the iPhone, but the Pixel is not available where I live, and Google doesn't seem to care about distribution. Nor does it do enough with its SoC development. Nothing comes close on laptops either. And Windows and Linux aren't exactly in good shape. I have nowhere to run, and I have wished for a third option for a very, very long time.


You can just move away from them on your computer and keep the iPhone. It's a minor device. That's what I plan on doing. If I get fed up with my iPhone, I also have nowhere to go, so I will reduce usage. Sideloading gets more and more difficult everywhere.


Just run Linux with UTM!


I haven't worked with GTK, but what you are describing here sounds reminiscent of what we have been dealing with trying to build Godot bindings in Zig with a nice API. The project is mid-flight, but Godot:

  - has tons of OOP concepts: classes, virtual methods, properties, signals, etc.
  - has a C API to work with all of those concepts and define your own objects, properties, and so on
  - manages the lifetimes of all engine objects (you can attach userdata to any of them)
  - has a whole tree of reference-counted objects
It's a huge headache trying to figure out how to tie this into Zig idioms in a way that makes for an optimal API (specifically, dealing with lifetimes). We've come pretty far, but I am wondering if you have any additional insights or code snippets I should look at.

working on this problem produced this library, which I am not proud of: https://github.com/gdzig/oopz

here's a snippet that kind of demonstrates the state of the API at the moment: https://github.com/gdzig/gdzig/blob/master/example/src/Signa...

also.. now I want to attempt to write a Ghostty frontend as a Godot extension


Hopefully it's improved, but the last time I wrote a GTK binding for a language, it was miserable. 98% of it was sane, but the remaining 2% had things like "whether or not this function takes a reference to this object depends on the other parameters passed" which made liveness analysis "interesting."
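To make the "2%" case concrete, here is a minimal, hypothetical sketch (not actual GTK API; `Widget`, `Container`, and `add_child` are made-up names) of a function whose ownership-transfer behavior depends on another parameter, so a safe binding has to branch on it:

```rust
use std::rc::Rc;

// Hypothetical stand-in for a refcounted toolkit object.
struct Widget {
    name: String,
}

struct Container {
    children: Vec<Rc<Widget>>,
}

impl Container {
    // Toy model of a C function whose "transfer" semantics depend on a
    // flag: if `take_ownership` is true, the container consumes the
    // caller's reference ("transfer full"); otherwise it adds its own
    // reference and hands the caller's back ("transfer none").
    fn add_child(&mut self, child: Rc<Widget>, take_ownership: bool) -> Option<Rc<Widget>> {
        if take_ownership {
            self.children.push(child);
            None
        } else {
            self.children.push(Rc::clone(&child));
            Some(child)
        }
    }
}

fn main() {
    let w = Rc::new(Widget { name: String::from("button") });
    let mut c = Container { children: Vec::new() };

    // "transfer none": the container clones; we keep our reference.
    let kept = c.add_child(Rc::clone(&w), false).unwrap();
    assert_eq!(Rc::strong_count(&w), 3); // w + kept + container's clone

    // "transfer full": the container consumes `kept`; the count is unchanged.
    assert!(c.add_child(kept, true).is_none());
    assert_eq!(Rc::strong_count(&w), 3);
    let _ = &w.name;
}
```

A binding generator can't pick one signature for such a function, which is exactly what makes automated liveness analysis "interesting."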


This is what people mean when they say "make invalid states unrepresentable", languages with union types can easily avoid this problem.


It doesn't sound like they are talking about invalid states; more like they are talking about the kind of thing that in Rust would be represented by `Option<Box<dyn SomeTrait>>` or suchlike. Maybe your point is that in Rust much less ceremony is necessary to avoid hitting a null pointer when doing this. But still, in either language it's easy to end up with hard-to-follow logic.
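For reference, a minimal Rust sketch of that pattern (`Callback` and `Add` are made-up names for illustration): a nullable, owned trait object where the compiler forces every caller to handle the empty case.

```rust
trait Callback {
    fn run(&self) -> i32;
}

struct Add(i32, i32);

impl Callback for Add {
    fn run(&self) -> i32 {
        self.0 + self.1
    }
}

// `None` plays the role of a C null pointer, but it cannot be
// dereferenced by accident: you must match or map on it.
fn invoke(cb: &Option<Box<dyn Callback>>) -> Option<i32> {
    cb.as_ref().map(|c| c.run())
}

fn main() {
    let set: Option<Box<dyn Callback>> = Some(Box::new(Add(2, 3)));
    let unset: Option<Box<dyn Callback>> = None;
    assert_eq!(invoke(&set), Some(5));
    assert_eq!(invoke(&unset), None);
}
```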


not super familiar with Rust, but isn't Option<T> just a union type of null and T? I get that the language has special semantics for this compared to a union type, but it is conceptually just a union.

For example this is something you can do with typescript.

  function f(args: Arguments) { ... }

  type Arguments = { a: number, b: number } | { a: number, b: string, c: number }
so an Arguments value of { a: 1, b: 1, c: 1 } is not representable.


> not super familiar with Rust but isn't Option<T> just an union type of null and T?

Only when the niche optimization applies, i.e. T has a forbidden bit pattern (like a non-null pointer) that can encode None for free; otherwise it's a tagged union with an explicit discriminant.

That's not what the comment you're replying to is about, though.
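This is observable from the type sizes; a small sketch:

```rust
use std::mem::size_of;

fn main() {
    // Niche optimization: Box<u64> is a non-null pointer, so None can be
    // encoded as the all-zero bit pattern; the Option adds no space.
    assert_eq!(size_of::<Option<Box<u64>>>(), size_of::<Box<u64>>());

    // No niche: every u64 bit pattern is a valid value, so the Option
    // needs a separate discriminant, i.e. it is laid out as a tagged union.
    assert!(size_of::<Option<u64>>() > size_of::<u64>());
}
```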


I'm pretty sure they're talking about reference counting that depends on the arguments, not about optional arguments or invalid argument combinations.


Thx for the work you're doing! Just out of curiosity, I sometimes struggle to write performant C# Godot code because it's hard to interface with the engine without doing a lot of back and forth conversions to engine types. You end up doing a lot of allocations. Did you run into that kind of stuff while creating your bindings?


My understanding may be out of date, but the C# support was created before GDExtension existed. The team has been working hard on porting it over to GDExtension. Once they are done, it should be much more performant, and they will finally be able to ship only one version of the editor. I believe the original C# bindings do a lot of unnecessary marshaling at the ABI.

With GDExtension, the core builtin types like `Vector3` are passed by value. Avoid unnecessarily wrapping them in Variant, a specialized tagged union, where you can. You can see the documentation here; you have direct access to the float fields: https://gdzig.github.io/gdzig/#gdzig.builtin.vector3.Vector3

Engine objects are passed around as opaque pointers into engine managed memory. The memory for your custom types is managed by you. You allocate the engine object and essentially attach userdata to it, tagged with a unique token provided to your extension. You can see the header functions that manage this: https://github.com/godotengine/godot/blob/e67074d0abe9b69f3d...

But this is how the lifetimes for the objects get slightly hairy (for us, the people creating the language bindings). Our goal with the Zig bindings is to make the allocations and lifetimes extremely obvious, a la Zig's philosophy of "No hidden memory allocations". It is proving somewhat challenging, but I think we can get there.

There are still a lot of surprising or unintuitive allocations that can happen when calling into Godot, but we hope to expose those. My current idea is to accept a `GodotAllocator` on those functions (and do nothing with it; just use it to signal that an allocation happens). You can read the code for the `GodotAllocator` implementation: https://github.com/gdzig/gdzig/blob/master/gdzig/heap.zig#L8...

If we succeed, I think Zig can become the best language to write highly performant Godot extensions in.


Another thing it does badly, and I am not sure if they are sorting it out with the new design, is the magic methods crap.

They had to copy that bad idea from Unity, where methods are named in a specific way and then extracted via reflection.

Either provide specific interfaces that components have to implement, use attributes, or make use of generics with type constraints.

Maybe for Unity that made sense, as they started with scripting languages and then bolted Mono on the side, but it never made sense to me.


> extracted via reflection

I think you are talking about dispatch of virtual methods, which is still a thing, but the performance cost can be somewhat mitigated.

the names of the methods are interned strings (called `StringName`). a naive implementation will allocate the `StringName`, but you can avoid the allocation with a static lifetime string. we expose a helper for comptime strings in Zig[0].

then, extension classes need to provide callback(s)[1] on registration that look up and call the virtual methods. as far as I know, the lookup happens once, and the engine stores the function pointer, but I haven't confirmed this yet. it would be unfortunate if not.

at least right now in GDZig, though this might change, we use Zig comptime to generate a unique function pointer for every virtual function on a class[2]. this implementation was by the original `godot-zig` author (we are a fork). in theory we could inline the underlying function here with `@call(.always_inline)` and avoid an extra layer of indirection, among other optimizations. it is an area I am still figuring out the best approach for

virtual methods are only ever necessary when interfacing with the editor, GDScript, or other extensions. don't pay the cost of a virtual method when you can just use a plain method call without the overhead.

[0]: https://gdzig.github.io/gdzig/#gdzig.builtin.string_name.Str...

[1]: https://github.com/godotengine/godot/blob/e67074d0abe9b69f3d...

[2]: https://github.com/gdzig/gdzig/blob/5abe02aa046162d31ed5c52f...
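Not Godot-specific, but the same tradeoff can be sketched in Rust terms (`Op` and `Load` are made-up types): only pay for indirect ("virtual") dispatch when you actually need late binding.

```rust
trait Op {
    fn cost(&self) -> u32;
}

struct Load;

impl Op for Load {
    fn cost(&self) -> u32 {
        1
    }
}

// Dynamic dispatch: the call goes through a vtable pointer at runtime,
// analogous to a virtual method lookup.
fn cost_dyn(op: &dyn Op) -> u32 {
    op.cost()
}

// Static dispatch: monomorphized per concrete type, trivially inlinable,
// analogous to a plain method call without the overhead.
fn cost_static<T: Op>(op: &T) -> u32 {
    op.cost()
}

fn main() {
    assert_eq!(cost_dyn(&Load), 1);
    assert_eq!(cost_static(&Load), 1);
}
```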


That, and the exposure of _Something methods in C#, which don't follow C# conventions; private/protected methods have keywords for that.

Thanks for the overview.


I did not know this was a project in progress, and it's quite exciting. I love Godot, and am quite fond of Zig as well. I'll be keeping my eye on this.


here, I'll copy the first paragraph of TFA for you:

> Many software projects take a long time to compile. Sometimes that’s just due to the sheer amount of code, like in the LLVM project. But often a build is slower than it should be for dumb, fixable reasons.


it can narrow the payload: https://zigbin.io/7cb79d

I think the post would be more helpful if it had a concrete use case. let's say a contrived bytecode VM:

  dispatch: switch (instruction) {
      inline .load, .load0, .load1, .load2, .load3 => |tag| {
          const slot = switch (tag) {
              .load => self.read(u8),
              else => @intFromEnum(tag) - @intFromEnum(Instruction.load0),
          };
          self.push(self.locals[slot]);
          continue :dispatch self.read(Instruction);
      },
      // ...
  }
"because comptime", this has effectively the same runtime performance as the common:

  dispatch: switch (instruction) {
      .load => {
          self.push(self.locals[self.read(u8)]);
          continue :dispatch self.read(Instruction);
      },
      .load0 => {
          self.push(self.locals[0]);
          continue :dispatch self.read(Instruction);
      },
      .load1 => {
          self.push(self.locals[1]);
          continue :dispatch self.read(Instruction);
      },
      .load2 => {
          self.push(self.locals[2]);
          continue :dispatch self.read(Instruction);
      },
      .load3 => {
          self.push(self.locals[3]);
          continue :dispatch self.read(Instruction);
      },
      // ...
  }
and this is in a situation where this level of performance optimization is actually valuable to spend time on. it's nice that Zig lets you achieve it while reusing the logic.


I did not realize you could inline anything other than an `else` branch! This is a very cool use for that.

