jasoneckert's comments on Hacker News

This is great, thanks! It sort of feels like browsing for gems in a used bookstore and stumbling onto authentic, personal writing. I'm always up for that, and plan on spending plenty of time exploring the list.

I’ve submitted mine as well - cheers!


>It sort of feels like browsing for gems in a used bookstore and stumbling onto authentic, personal writing

I don't know that I've heard a better description of what the so-called small web is about. It's the clearest answer to the "why" of having a small web of discoverable personal blogs and sites.


That is such a lovely way to put it. Do you mind if I add it to the about page and link to this comment?

Of course I don't mind - please do! :-)

It's a nice animation, but for such a significant anniversary - and from a company like Apple - I expected a lot more hoopla and content. This could indicate that there wasn't a lot of planning involved, that it wasn't a high-priority item, or that Apple had enough people with time to focus on it.

It's almost as if someone near the end of a meeting said "Oh crud, we've got to do something to acknowledge our 50th anniversary - can someone put something together, and quick?"


Seems to me that they are simply saying it's not important:

"At 50 years, it’s only natural to look back. But Apple has always looked forward, building tools and delivering experiences that enrich people’s lives. As we celebrate how far we’ve come, we’re inspired by where we’ll go — together."

And, no, I don't think they left it to chance.

Also there's an art video to go with the art animation.


Apple's acknowledged their anniversary for weeks, especially during the recent product launches like the Neo. It started back in mid-March [0].

[0] https://www.apple.com/newsroom/2026/03/apple-to-celebrate-50...


They had a bunch of artists playing live at some Apple Stores, they even got Paul McCartney doing a concert at their HQ.

It's wild to me that at 83 years of age, he still wants to do gigs like this. Pretty sure he doesn't need the money.

It's not hard for me to imagine that performing as one of the world's most beloved rock stars, doing what you've loved for many decades, is an enjoyable way to spend your time, regardless of the paycheck.

Wow. Few more million in the bank for Macca.

The Twentieth Anniversary Macintosh came to be regarded as such a mistake and quintessential example of how misguided Apple was during the wilderness era that I'm not surprised they went in the opposite direction. Institutional memory etc etc

Agreed

a Jobsian decision

For me, drawing the line as to when you will leverage AI and when you won't comes down to a quote from Kurt Vonnegut: "Practicing an art, no matter how well or badly, is a way to make your soul grow, for heaven's sake. Sing in the shower. Dance to the radio. Tell stories. Write a poem to a friend, even a lousy poem. Do it as well as you possibly can. You will get an enormous reward. You will have created something."

Art is where I choose to draw the line, for both ideation and content generation. That work report I leveraged AI to help flesh out isn't art, but my personal blog is, as is anything I must internalize (that is, thoroughly understand and remember). This is why I have the following disclaimer on my blog (and yes, the typo on this page is purposeful!): https://jasoneckert.github.io/site/about-this-site/


This is the second time today that I come across this quote. I’m always happy to see Vonnegut in the wild. Let alone two times in a day!

As someone who came from the SGI O2/Octane era when high-end workstations were compact, distinctive, and sexy, I’ve never really understood the allure of the Mac Pro, with the exception of the 2013 Mac Pro tube, which I owned (small footprint, quiet, and powerful).

For me, aesthetics and size are important. That workstation on your desk should justify its presence, not just exist as some hulking box.

When Apple released the Mac Studio, it made perfect sense from a form-factor point-of-view. The internal expansion slots in the M2 Mac Pro didn't make any sense. It was like a bag of potato chips - mostly air. And far too big and ugly to be part of my work area! I'm surprised that Apple didn't discontinue it sooner.


As much as I love alluring designs such as the NeXT Cube (which I have), the Power Mac G4 Cube (which I wish I had), and the 2013 Mac Pro (which I also have), sometimes a person needs a big, hulking box of computational power with room for internal expansion, and from the first Quadra tower in the early 1990s until the 2012 Mac Pro was discontinued, and again from 2019 until today, Apple delivered this.

Even so, the ARM Mac Pro felt more like a halo car than a workhorse. It may have been more compelling had it supported GPUs. Without that support, the price premium of the Mac Pro over the Mac Studio was too great for many people to justify, unless they absolutely needed internal expansion.

I’d love a user-upgradable Mac like my 2013 Mac Pro, but it’s clear that Apple has long moved on with its ARM Macs. I’ve moved on to the PC ecosystem. On one hand ARM Macs are quite powerful and energy-efficient, but on the other hand they’re very expensive for non-base RAM and storage configurations, though with today’s crazy prices for DDR5 RAM and NVMe SSDs, Apple’s prices for upgrades don’t look that bad by comparison.


> sometimes a person needs a big, hulking box of computational power with room for internal expansion

Between cloud computing and server racks, is this still a real niche?


Yes. A prominent example is people who don't know how to build servers but do a lot of video editing and want high disk speeds and processing power.

As someone who worked on the M2 Mac Pro and has a real soft spot for it, I get it. It's horrendously expensive and doesn't offer much benefit over a Mac Studio and a Thunderbolt PCIe chassis. My personal dream is that VMs would support PCIe passthrough so you could just spin up a Linux VM and let it drive the GPUs. But at that point, why are you buying a Mac?

Opinions are my own obvs.


> My personal dream is that vms would support pci pass through and so you can just spin up a Linux vm and let it drive the gpus.

Isn't SR-IOV just that? It's well supported by both Windows and Linux.


Yes - that's what I was referring to. Basically the virtualization framework supporting handing a specific PCIe device off to a VM. Link management is still handled by macOS, but the actual PCIe packets are handled by the VM (which could be Windows or Linux, which would have a GPU driver).

Under a comment regarding the O2/Octane (both of which I own :) era, I first read “vms” as VMS, not multiple instances of a VM…

> Opinions are my own obvs.

Whose else would they be?


> as someone who worked on the m2 mac pro

They're trying to make it very clear they're not speaking on behalf of Apple Inc, despite having worked (or working) there.

Big companies like to give employees some minimal "media training", which mostly amounts to "do not speak for the company, do not say anything that might even slightly sound like you're speaking for the company".


An employer's, especially as they stated in the same comment that they worked (and perhaps still work) at Apple.

Oh, I interpreted it as “did work using a Mac Pro” vs helped develop the Mac Pro itself.

  > > Opinions are my own obvs.

  > Whose else would they be?
On the internet? Often the opinions of others they see getting upvotes.

>> Opinions are my own obvs.

> Whose else would they be?

takes a look at the user profile

Oh, they are a journalist/writer for a big name outfit


Maybe he was trying to say he isn’t a spokesman for anyone else :-)

Do/did you always have to work in the office, or did you get to work from home by taking a test rig with you? Always been curious about this.

Hardware generally isn't allowed outside of lockdowns. There are things you can do with dev fused hardware to remotely control it which make life easier. But most devs just come into the office since being there in person is nicer.

yeah that makes sense. thanks for replying!

I'm surprised they even tried selling an Apple Silicon Mac Pro - I expected that product to die the moment they announced the transition. Everything that makes Apple Silicon great also makes it garbage for high-performance workstations.

The allure of the Mac Pro is that you could dodge the Apple Tax by loading it up with RAM and compute accelerators Apple couldn't mark up. Well, Apple Silicon works against all of that. The hardware fabric and PCIe controller specifically prohibit mapping PCIe device memory as memory[0], which means no GPU driver ever will work with it. Not even in Asahi Linux. And the RAM is soldered in for performance. An Ultra class chip has like 16 memory channels, which even in a 1-DIMM per channel routing would have trace lengths long enough to bottleneck operating frequency.

The only thing the socketed RAM Mac Pros could legitimately do that wasn't a way to circumvent Apple's pricing structure was take terabytes of memory - something that requires special memory types that Apple's memory controller IP likely does not support. Intel put in the engineering for it in Xeon and Apple got it for free before jumping ship.

Even then, all of this has gone completely backwards. Commodity DRAM is insanely expensive now and Apple's royalty-bearing RAM prices are actually reasonable in comparison. So there's no benefit to modularity anymore. Actually, it's a detriment, because price-discovery-enforcing scalpers can rip RAM out of perfectly working computers and resell the RAM. It's way harder to scalp RAM that's soldered on the board.

[0] In violation of ARM spec, even!
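The memory-channel point can be checked with back-of-the-envelope arithmetic. Assuming the commonly reported Ultra-class configuration of a 1024-bit LPDDR5 bus at 6400 MT/s (i.e. 16 x 64-bit channels - these figures are assumptions, not from this thread), the peak bandwidth the soldered layout buys is roughly:

```python
# Rough peak-bandwidth arithmetic for an assumed Ultra-class configuration:
# a 1024-bit LPDDR5 bus running at 6400 MT/s.
bus_width_bits = 1024
transfers_per_sec = 6400e6                 # 6400 mega-transfers per second
bytes_per_transfer = bus_width_bits // 8   # 128 bytes move per transfer
peak_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(peak_gb_s)  # 819.2
```

That ~800 GB/s figure is several times what a conventional two-DIMM socketed layout delivers, which is the routing/frequency trade-off the comment is describing.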


> An Ultra class chip has like 16 memory channels, which even in a 1-DIMM per channel routing would have trace lengths long enough to bottleneck operating frequency.

CAMM fixes this, right?

> Actually, it's a detriment, because price-discovery-enforcing scalpers can rip RAM out of perfectly working computers and resell the RAM. It's way harder to scalp RAM that's soldered on the board.

Scalping isn't a thing unless you were selling below the market price to begin with which, even with the higher prices, Apple isn't doing and would have no real reason to do.

Notice that in real life it only really happens with concert tickets, and that's because of the scam sandwich that is Ticketmaster.


Ticketmaster is a reputation management company. Their true purpose is to take the reputation hit for charging market value for limited availability event tickets. Artists do not want to take this reputation hit themselves because it impacts their brand too much.

Which is why it's quite appropriate for their reputation to be absolute shit and for members of the public to make sure the stink spreads to anyone who chooses to do business with them as a disincentive to doing it.

Ticketmaster is owned by Live Nation which owns at least 338 major concert venues [1]. Their market power in the venue business allows them to force artists to use Ticketmaster for ticket sales. The artists don't mind though, as they can tell their fans they have no other choice but to use Ticketmaster. Ticketmaster absorbs all of the reputational stink and the artists likely earn more money than they otherwise would have if they were forced to sell tickets at the low prices their fans want.

[1] https://en.wikipedia.org/wiki/Live_Nation_Entertainment


Except that they don't absorb all of the reputational stink because "Live Nation owns at least 338 major concert venues" is clearly a BS excuse when there are more than 10,000 concert venues in the US, and then the fans still blame the artists for using Ticketmaster.

> aesthetics and size are important

It's dumb from a practical perspective. But I keep hoping they'll vertically compress their trashcan design so it looks like their Cupertino headquarters.


The very definition of a halo product.

>That workstation on your desk should justify its presence

"It does the work you want it to do" is not enough to justify its presence?


Parent comment OP has to be trolling

> That workstation on your desk should

Under your desk, right? Right?!


It’s a desktop computer, not a deskbottom computer.

I have a sit/stand desk so mine's on top, it makes organising the cables much easier.

Nothing as swish looking as a Mac Pro though, it's a plain black Lian Li behemoth from the late 00s.


I also have a standing desk, and my desktop computer is still on the floor. That way I can just route all the cables to the back and then under the desk to my PC. Looks very clean as well.

Yep, with wireless keyboards and mice you really only need your monitor cables on the desk in this setup.

Yep, same.

If it is really a behemoth, how does your standing desk hold it up?

I have a Lian Li anniversary edition snail case and I don’t think any moveable desk could hold it.


So it might not be quite as bulky as one of those; pretty sure this is it:

https://www.scan.co.uk/products/lian-li-pc-60fnb-black-alumi...

My desk can probably lift me though.

https://files.catbox.moe/r3fqqv.jpg

Case looks small here but it's a 42" monitor for scale.


It'd get mighty dusty under there after a while; best to keep it where you can see it so it doesn't get into trouble.

I mean... if you spent $7,000 on it, do you really want to hide it away under the desk?

Yes, because you’re buying a tool, not a conversation piece.

Why not both? ¯\_(ツ)_/¯

That’s why my Lian Li anniversary edition is next to my desk. Also because it is nearly as tall and wouldn’t fit under it.

But I think the Mac Pro was never really trying to be on your desk in the first place. For a lot of its target users, it lived under the desk or in a rack, and the size wasn't about aesthetics so much as airflow, expansion, and serviceability.

I wish I'd never traded in my 2016 Mac Pro (polished aluminum tube): it was beefy, it was silent, it had a clever thermal design (like the PowerPC Cube 20 years earlier or so), and I'd upgraded the living crap out of it for cheap.

I hope this is satire.

This was exactly my first thought when I saw the title. And after reading the contents of the blog, it's pretty clear that ARM is laser-focused on getting a piece of their customers' pie by competing with them. This is likely why they are riding the AI hype train hard with their ill-suited name (AGI).

Unfortunately for them, I think hardware vendors will see past the hype. They'll only buy the platform if it is very competitively priced (i.e., much cheaper) since fortune favours long-lived platforms and organizations like Apple and Qualcomm.


In short, this reads like a mix of valid historical pain points and outdated assumptions.

The post frames Wayland security as “you can’t do anything,” but that’s a misunderstanding. Even under X11, any app can log keystrokes, read window contents, and inject input into other apps. Wayland flips this to isolation-by-default: explicit portals/APIs for screen capture, input, etc.

Moreover, the performance argument is weak and somewhat contradictory. The author claims there is no clear performance win, that it's sometimes slower, and that hardware improvements make it irrelevant. But Wayland reduces copies and avoids X11 round trips (an architectural win). Actual performance depends heavily on the compositor + drivers, and I've found that modern hardware has HUGE performance improvements (especially Intel, AMD, and Apple Silicon via the Asahi driver).

The NVIDIA argument is also dated. Sure, support was historically bad due to EGLStreams vs GBM, but this has improved significantly in recent driver releases.

Many cited issues are outdated too. OBS, clipboard, and screen sharing issues are now mostly (if not entirely) solved in the latest GNOME/KDE.

I've been using Wayland exclusively on Fedora and Fedora Asahi Remix systems for many years, alongside Sway (and occasionally GNOME and KDE). Adoption has accelerated in many distros, and XWayland for legacy apps is excellent (although I suspect "legacy" is a trigger word for the author ;-).

There's no stagnation here... what we're looking at is a slow migration of a foundational layer, which historically always takes a decade or more in the Linux world.


> Actual performance depends heavily on compositor + drivers, and I've found that modern hardware has HUGE performance improvements (especially Intel, AMD, and Apple Silicon via the Asahi driver)

The author's argument is that those hardware improvements could have been had for free with X11 upgrades. I'm not saying it's a complete argument. But talking about architectural wins sounds like conceding the argument.


> Author’s argument is those hardware improvements could have been had for free with X11 upgrades.

I do NOT miss having tearing all the time with X11. There were always kludgy workarounds. Even if you stopped and said, OK, let's not run NVIDIA, let's do Intel since they have great FOSS driver support, look back at X11 2D acceleration history: EXA, SNA, UXA, XAA? Oh right, all replaced with GLAMOR. OK, run the modesetting driver; right, we still need a compositor on top of our window manager because we don't vsync without one.

Do you have monitors with a different refresh rate? Do you have muxes with different cards driving different outputs? All this stuff X11 sucks at. Ok the turd has been polished well now after decades, it doesn't need to run as root/suid anymore, doesn't listen for connections on your network, but the security model still sucks compared to wayland, and once you mix multiple video cards all bets are off.

But yeah, clipboard works reliably, big W for X11.


It reads like a user who tried Wayland again last week, found the same issues, and wrote a piece trying to summarize why they remain sad after 17 years of waiting for Wayland to address them.


There is no "Wayland" to address these issues. It's like asking "web" to address its issues.

Wayland is a protocol with multiple different implementations.


But this is sort of the nature of the problem?

In X11, the problem was the X server. Now, X11's design philosophy was hopelessly broken and needed to be replaced, but it wasn't replaced. As you correctly point out, there is no "Wayland"; Wayland is a methodology, a description of how one might implement the technologies necessary to replace X11.

This has led to hopeless fracturing and replication of effort. Every WM is forced to become an entire compositor and partial desktop environment, which they inevitably fail at. In turn application developers cannot rely on protocol extensions which represent necessary desktop program behavior being available or working consistently.

This manifests in users feeling the ecosystem is forever broken, because for them, on their machine, some part of it is.

There is no longer one central broken component to be fixed. There are hundreds of scattered, slightly broken components.


I maintain that Red Hat backed it as part of a play to make it harder to develop competing distros that aren't basically identical to Red Hat's product.

Their actions on systemd, Wayland, plus gnome and associated tech, sure look like classic “fire and motion”. Everyone else has to play catch-up, and they steer enough incompatible-with-alternatives default choices that it’s a ton of work and may involve serious compromises to resist just doing whatever they do.


I miss the Unix philosophy


Wayland is far more aligned with the Unix philosophy than Xorg ever was. Xorg was a giant, monolithic, do everything app.

The Unix philosophy is fragmentation into tiny pieces, each doing one thing and hoping everyone else conforms to the same interfaces. Piping commands between processes and hoping for the best. That's exactly how Wayland works, although not in plain text because that would be a step too far even for Wayland.

Some stuff should not follow the Unix philosophy, PID 1 and the compositor are chief examples of things that should not. It is better to have everything centralized for these processes.


In X you have the server, window manager, compositing manager, and clients, and all are coupled by a very flexible protocol. This seems nicely split and aligned with the Unix philosophy to me. It also works very well, so I do not think this should be monolithic.


This is quite wrong? There are some features that get blocked from being implemented because Wayland refused to define a protocol for everyone to implement. Window positioning being a recent example of how progress can get blocked for many years due to Wayland.


This is the same cop-out people use to talk about "Linux."

"No, Linux isn't bad, your distro/DE is bad, if you used XYZ then you wouldn't have this problem." And then you waste your time switching to XYZ and you just find new problems in XYZ that you didn't have in your original distro.

I'm genuinely tired of this in the Linux community. You can't use the "Wayland" label only for the good stuff like "Wayland is good for security!" and "Wayland is the future" and then every time someone complains about Wayland, it is "no, that's not true Wayland, because Wayland isn't real."


But that's what we signed up for in the Linux world. Linux systems are a smorgasbord of different components by design, and that means being specific. I'm using KDE Plasma 6; that's a different experience than someone using Cosmic or Sway.


Furthermore, Wayland is, first and foremost, a protocol, not standalone software like the Linux kernel. Wayland is no more than an API format transmitted over a wire protocol. So properly criticizing Wayland means criticizing the abstraction this API creates and the constraints it introduces.


Could you briefly explain in simple terms, why I as a user would care about any of that? I want stuff to work. With Wayland, it largely doesn't. I don't terribly care about the semantics of it.


> Wayland flips this to isolation-by-default: explicit portals/APIs for screen capture, input, etc.

The problem is that old (and even not-so-old) apps don't use those APIs, so interactions like UI automation on Wayland are limited, if not impossible. I'd love to grant a specific permission just for selected GUI apps, but I can't because they don't support it.

There's a reason why RPA software on Wayland is limited to web apps inside a browser. Or something extremely janky like taking screenshots of the entire desktop and doing OCR. But then you can't interact with unfocused apps.
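For reference, the portal route apps are supposed to use looks like this; a minimal sketch (org.freedesktop.portal.Screenshot is the real portal D-Bus interface, but actually invoking it needs a session bus and a portal backend, so this only assembles the gdbus command rather than running it):

```python
# Assemble (not run) the D-Bus call an app or RPA tool would make to request
# a screenshot through xdg-desktop-portal on Wayland.
cmd = [
    "gdbus", "call", "--session",
    "--dest", "org.freedesktop.portal.Desktop",
    "--object-path", "/org/freedesktop/portal/desktop",
    "--method", "org.freedesktop.portal.Screenshot.Screenshot",
    "",                          # parent window handle (empty = none)
    "{'interactive': <false>}",  # options vardict
]
print(" ".join(cmd))
```

The compositor mediates the request (possibly with a user prompt) and returns a file URI via a Request object, which is exactly the isolation-by-default model being discussed - and why tools that need to drive unfocused apps directly are stuck.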


In my experience, I have found xdg-desktop-portal to be, for whatever reason, completely non-functional on Arch/Hyprland. It must be an issue with my config, but on X11 I never had to think about this.


This reads like AI/FSD-bro speak: "no, that's all old news, you clearly haven't tried the new cutting edge model/build bro! it's all fixed now!".

> Wayland security

Okay, that's great, but why would I care? If you can implement those security wins transparently in the background, cool. Otherwise, what I care about is being able to take a screenshot, not about some theoretical "security threat" from already vetted programs I run on my machine.

> OBS, clipboard, and screen sharing issues are now mostly (if not entirely) solved in the latest GNOME/KDE.

Oh, the clipboard works mostly correctly now, after some 17 years of development? Could not have come up with a more damning statement. Complete misalignment of priorities.


  > "no, that's all old news, you clearly haven't tried the new cutting edge model/build bro! it's all fixed now!"
Exactly. And it's standard rhetoric for the wayland fanboys. "The fix for this was committed 15 minutes ago! You just need to check out the unstable branch and recompile!"

  > what I care about is being able to take a screenshot, not about some theoretical "security threat" from already vetted programs I run on my machine.
Yeah, the security theatre thing is also part of their standard rhetoric. It's a good bit of rhetoric because it scares people who don't know better. They all love to talk about how it's just so insecure to allow us to do things that every desktop environment has been able to do for 30+ years.

But strangely, in decades, I've never seen a single example of anyone taking advantage of this horrible security design and it becoming a widespread problem in the wild. I keep asking the wayland bros to give me an example of this happening in the wild and causing a problem that's even mildly widespread. Strangely when I ask that question they always seem to forget to respond to that part of my post and move on to their next piece of standard rhetoric.

  > Oh, the clipboard works mostly correctly now, after some 17 years of development? Could not have come up with a more damning statement. Complete misalignment of priorities
Tsk tsk, now you're just being cynical. We should be celebrating that wayland has managed to kinda-sorta get a feature working which was working just fine in X11 by ~1998, and which worked just fine in Windows <3.1, and which worked just fine in Mac OS in the 1980s. And they've managed to do it in only ~3 years longer than it took to get Duke Nukem Forever into stores! Yay them!


I actually do something similar on my personal site using this note that includes a purposeful typo: https://jasoneckert.github.io/site/about-this-site/

I'm hoping people catch that typo after reading "every single word, phrase, and typo (purposeful or not)", and I've smiled every time someone has posted a PR with a fix for it (which I subsequently reject ;-)


God that machine was terrible - underpowered and undercooled, which led to frequent overheating and component failures. When I first started at Sun, they put one of those on my desk as a joke on my first day (it was quickly replaced so that I could get some real work done).


At work in the 90s we gave tons of old SPARCstation 10s away. They rapidly replaced all the IPX and IPC machines at the computer clubs around Sweden. One Volvo was destined for Luleå and was really weighed down with a trunk full of pizza boxes.


Yeah it was a real piece of junk, but I guess there's no accounting for nostalgia. People also like to restore the SGI Indy, easily the worst machine that SGI ever shipped.

At one point decades ago there were a lot of these IPXs and their SCSI accessories on eBay and they were a decent source of project boxes because you could use the power supply and stick your project where the hard drive was supposed to be, with the wires coming out the SCSI port. It looks like the model 411 is still $30 or so on eBay but there are few.


The Indy was awesome. One client had 400 of them, as long as you didn’t take the lowest RAM entry level model they were excellent. Hardware was reliable, graphical desktop better than MacOS today, and very low support burden.


So true. Keep in mind, OP said it was the worst machine SGI shipped, not the worst machine Sun shipped. SGI's worst machine could be fixed by adding some RAM. Sun's worst machines were completely unsalvageable.


Hey, don't trash-talk the Indy like that. It has... well, it is Web! and has VRML... and it's your only option for an N64 devkit. So, there's that. Overall you're right, though. Entry-level machine. I have one in working order; it rarely gets use next to the Indigo2 MAX Impact. I do have one SPARC that hasn't been booted in ages. I have to check whether it's an IPX or a Classic. I'm even afraid to boot it up.


> People also like to restore the SGI Indy

Because the Indy (and O2) are actually attainable. Indigo2, Octane2, Tezro cost 2-3x minimum. Sometimes a Personal IRIS comes up for relatively cheap though.


I managed a lab of them. I _hated them_. They were unreliable, slow, and just absolutely miserable because they created endless complaints.

We were rolling out labs of Windows machines. Except for the lack of terminal, they were better on every single axis for the common university lab use cases - mostly netscape/mosaic and applications..

I also managed NeXT slabs and cubes; they were vastly better than the sun boxes because we had installed HDDs in the cubes and extra memory. The only problem with them was the absolutely terrible, shit behavior when users accidentally browsed the AFS root...

The only positive thing I can say about those Sun boxes is that _one_ behavior was better than NeXT. With NeXT, students would pull the power on them after waiting four or five minutes on the beachball due to AFS I/O.


A younger person who only knows the comparative merits of Windows, macOS, and Linux in this decade probably cannot imagine the relief felt by people when they were finally able to move their technical applications off unix boxes onto Windows NT workstations. The situation was so bad, the computers cost so much and worked so poorly, a Dell with a Pentium Pro was like a miracle, at the time.


Only some people who were around at that time welcomed Windows NT; others decried the various failings of Microsoft…


I don't have any nostalgia for old machines, I understand the 5- or 6-figure price tags were ridiculous, but I'm curious - in what way did Unix machines back then work poorly?


Windows on an 80486 vs. those boxes felt very much like comparing the latest M5 Macs to, say, a PPC 604 Mac.

No comparison at all. Just every single interactive aspect of them was worse in every possible way, and that includes I/O performance. At the time, people would babble about how much faster SCSI was, but the disks sitting in PCs were blazing fast in practice despite being attached by glorified joystick ports.


For the price, these Sun workstations were slow as hell to me. X was horribly laggy. The UI put me off Unix GUIs for a decade. The mouse was meh.

I love the industrial design of these pizza boxes, though. I didn't mind when I was running them headless as IRC servers or web hosts.


They were kind of fast and more fun by the time you got to the Sun Blade 1500 running the GNOME desktop.

But yeah, complete white elephants at that point. Too little too late.


I always really enjoyed hp-ux with VUE in the early 90s. It was way ahead of windows (especially before 95 was out!) and fast.

Motif was hell to develop for though.


There is an irony that Wine is the most stable Linux ABI for GUI applications in 2026.


That means nothing when everything is either RHEL-bound, Ubuntu LTS, or Docker containers, alongside standalone services written in Go, which are everywhere.

Serious GUI software will be written in Qt 5/6, where the jump wasn't as bad as Qt 4 to Qt 5. Portability matters, now more than ever. Software will run on any OS, and several times faster than Electron.


I remember a lab with diskless systems where your disk quota was smaller than the kernel panic dump. So basically if you crashed a machine your account was instantly filled up and basically nothing would work. I believe it affected mail as well. Fun times.


Classic day 1 hazing, the Wimp Lo: https://www.youtube.com/watch?v=d696t3yALAY



Totally terrible. One place I worked, we all had SPARCs, and the first thing that happened whenever anyone left was a mad shuffle where everyone nicked everyone else's computer, with the IPX being the prize for whoever wasn't there at the time, or the new joiner. So I had the IPX for a while; even just using it as an X client for a remote build server, it was horrible.


Certain companies are well-known for their legal teams. Qualcomm is one (often described as a legal company that employs some engineers). Nintendo is the other.

As a result, Nintendo's legal team is far more likely to ensure they get refunded, and quickly. They could provide a template for others to follow.


Hmm, I would have thought of Oracle first.

(EDIT: I just mean as a litigious company, well-known for its legal team.)


Trump is too useful to Ellison right now, he isn't going to derail that gravy train over a few tens of millions of dollars.


When I worked there, I described it as a navy of lawyers with a dinghy of engineers.


macOS is a capable UNIX, but it's not Linux - which has since become the standard platform for most cloud/web/ML development.

As a developer myself who uses Fedora Asahi Remix as my daily driver, I can also tell you that Linux runs 2x faster (often much more) for everything compared to macOS - on the same hardware! And that performance gain is also important for my work :-)


Totally. I have a Minisforum PC that runs Void Linux that I ssh into from my MB Air. My worry is hitching your wagon to a project that could stop working one day through no fault of the devs.

