Hacker News | tempest_'s comments

Has apple been a serious development platform in the last 20 years?

I know a lot of devs like Apple hardware because it is premium, but OSX has always been "almost Linux" controlled by a company that cares more about iTunes than it does about the people using their hardware to develop.


At least 9 out of every 10 software engineers I know do all their development on a Mac. Because this sample is from my experience, it’s skewed to startups and tech companies. For sure, there are lots of devs outside those areas, but tech companies are a big chunk of the world’s developers.

So yea I would say Apple is a “serious development platform” just given how much it dominates software development in the tech sector in the US.


I have the feeling a lot of people take Macs because the other option is a locked down Windows, and Linux is not offered.

This. I ran Linux at work until last year, when it was finally disallowed. I went with locked-down Mac over locked-down Windows.

The hardware for a Linux laptop right now is not great. Especially for an arm64 machine. Even if the hardware is good the chassis and everything else is typically plastic and shitty.

That is a surprising sentiment. Most Dell and Lenovo laptops work just fine and are usually of reasonably good build quality (non-plastic chassis, etc.).

arm64 is however mostly bad. The only real contender for Linux laptops (outside of asahi) was Snapdragon's chips but the HW support there was lacking iirc.


They give us Dell Linux machines at work. They suck so bad and we have so many problems. Overheating, the camera is terrible, performance is bad relative to the huge weight of the device. Everything is a huge step down from Macs.

Whenever I see Linux people comparing Linux and Mac I'm amazed at the audacity. They are not in the same league. Not by a mile. Even the CLI is more convenient on the Mac which is truly amazing to me.


I prefer my Konsole setup on KDE, and I use both interchangeably all day, tbh. Camera, yeah. The irony is that heating issues become less of an issue with ARM.

What happened to all the love for Framework?

The honeymoon of Lego-brick replaceable USB ports is over?


Well they do have the Max+ 395 - 128GB beast https://frame.work/desktop

Which is non-trivial. The laptop scene is particularly difficult though.


I have a personal Framework 13 and a work-issued MacBook Pro. I love Framework’s mission of providing user-serviceable hardware; we need upgradable, serviceable hardware. However, the battery life on my MacBook Pro is dramatically better than on my Framework. Moreover, Apple Silicon offers excellent performance on top of its energy efficiency. While I use Windows 11 on my Framework, I prefer macOS.

Additionally, today’s sky-high RAM and SSD prices have caused an unexpected situation: Apple’s inflated prices for RAM and SSD upgrades don’t look that bad in comparison to paying market prices for DIMMs and NVMe SSDs. Yes, the Framework has the advantage of being upgradable, meaning that if RAM and SSD prices decrease, then upgrades will be cheaper in the future, whereas with a Mac you can’t (easily) upgrade the RAM and storage once purchased. However, for someone who needs a computer right now and is willing to purchase another one in a few years, then a new Mac looks appealing, especially when considering the benefits of Apple Silicon.


>>At least 9 out of every 10 software engineers I know do all their development on a Mac

I work in video games, you know, an industry larger than film - 10 out of 10 devs I know are on Windows. I have a work-issued Mac just to do some iOS dev, and I honestly don't understand how anyone can use it day to day as their main dev machine; it's just so restrictive in what the OS allows you to do.


It makes sense that you use Windows at a video game company. We use Windows as well at work and it's absolutely awful for development. I would really prefer a Linux desktop, especially since we exclusively deploy to Linux.

Weird... macOS is still completely open in my experience. Can you give an example?

I compile a tool we use, send it to another developer, they can't open it without going through system settings because the OS thinks it's unsafe. There is no blanket easy way to disable this behaviour.

We also inject custom dylibs into clang during compilation, and starting with Tahoe that started to fail - we discovered that it's because of SIP (System Integrity Protection). We reached out to Apple and got the answer that "we will not discuss any functionality related to operation of SIP". Great. So now we either have to disable SIP on every development machine (which IT is very unhappy about) or re-sign the clang executable with our own dev key so that the OS leaves us alone.


If SIP is kicking in, it sounds like you're using the clang that comes with Apple's developer tools. Does this same issue occur with clang sourced from homebrew, or from LLVM's own binary releases?

If it's being sent to another developer then asking them to run xattr -rd com.apple.quarantine on the file so they can run it doesn't seem insurmountable. I agree that it's a non-starter to ask marketing or sales to do that, but developers can manage. Having to sign and then upload the binary to Apple to notarize is also annoying but you put it in a script and go about your day.

But "completely open" Apple is not.


>it's just so restrictive in what the OS allows you to do.

The people using them typically aren't being paid to customize their OS. The OS is good if you just want to get stuff done and don't want to worry about the OS.


I work as a consultant in the position, navigation, and timing industry, and 10 out of 10 devs were on Windows. Before that I worked for a big Hollywood company, and while scriptwriters and VP executive assistants had Macs, everyone technical was on Windows. Movies were all edited and color graded on Windows.

Webshitters don't "engineer" anything, it's insulting you would insinuate that.

Anyone who watched the Artemis landing yesterday would have been keen to notice all the Windows PCs in use at Mission Control — nearly all hosting remote Linux applications.

Not a Mac in sight.

They were using VLC on Windows in space.

If all the Macs in the world disappeared tomorrow, everything essential would somehow continue unabated.


> Has apple been a serious development platform in the last 20 years?

This is one of those comments that is so far away from reality that I can’t tell if it’s trolling.

To give an honest answer: Using Macs for serious development is very common. At bigger tech companies most employees choose Mac even when quality Linux options are available.

I’m kind of interested in how someone could reach a point where they thought macs were not used for software development for 20 years.


> I’m kind of interested in how someone could reach a point where they thought macs were not used for software development for 20 years.

If you work with engineering or CAD software then Macs aren't super common at all. They're definitely ubiquitous in the startup/webapp world, but not necessarily synonymous with programming or development itself.


It is a weird situation. Apple products are consumer products but they make us use them as development hardware because there is no other way to make software for those products.

Making software for other Apple products is pretty low on the list of reasons I use a MBP.

128GB of RAM and an M4 Max makes for a very solid development machine, and the build quality is a nice bonus.


An artificial limit on the number of VMs you are allowed to launch doesn't make it solid.

Apple had real Unix a decade before Linux, a bad Unix copy, was made. NeXTSTEP was much better than Linux. "A budget of bad ideas" is what Alan Kay said about Linux [1], and he invented the personal computer.

My 1987-1997 ISP was based on several different Unixes running on Apple hardware, probably long before you were born.

Apple built several supercomputers.

[1] https://www.youtube.com/watch?v=rmsIZUuBoQs

[2] Founder School Session: The Future Doesn't Have to Be Incremental https://www.youtube.com/watch?v=gTAghAJcO1o


Alan Kay invented a dead end (Smalltalk). Meanwhile, Linux became the future.

Apple had a terrible Unix until they bought NextStep.


Are you talking about A/UX? That was one of the first Unix systems I was exposed to.

Yes, but I had others too: BSD on both 68000 and PowerPC.

Yeah, they were that, and for the last 20 years they have been the iPhone company.

Anything being developed for the Apple ecosystem requires use of the Apple development platform. Maybe the scope could be called "unserious," but the scale cannot be ignored.

I am aware.

However, having used Xcode at some point 10 years ago, my belief is that the app ecosystem exists in spite of it, and that people would never choose this given the choice.


> Has apple been a serious development platform in the last 20 years?

i dont think anyone asks this question in good faith, so it may not even be worth answering. see:

> I know a lot of devs like Apple hardware because it is premium, but OSX has always been "almost Linux" controlled by a company that cares more about iTunes than it does about the people using their hardware to develop.

yea fwiw macs own for multi-target deployments. i spin up a gazillion containers in whatever i need. need a desktop? arm-native linux or windows installations in utm/parallels/whatever run damn near native speed, and if im so inclined i can fully emulate x86/64 envs. i dont run into needing to do that often, but the fact that i can without needing to bust out a different device owns. the speed penalty barely even matters to me, because ive got untold resources to play around with in this backpack device that literally gets all-day battery. spare cores, spare unified mem, worlds my oyster.

just a few weeks ago i was in win xp 32bit sp2 using 86box, compiling something in a very legacy-dependent visual studio .net 7 environment that needed the exact msvc-flavored float precision that was shipping 22 years ago, and i needed a fully emulated cpu running at frequencies that would make the compiler make the same decisions it did 22 years ago. never had to leave my mac, didnt have to buy some 22 year old thinkpad on ebay; this thing gave me a time machine into another era so i could get something compiled to spec. these techniques arent widely heard of, but its just one of many scenarios where i dont have to leave my mac to get something done. to say its a swiss army knife is an understatement. its a swiss army knife that ships with the underlying hardware specs to let you fan out into anything.

for development i have never been blocked on macos in the apple silicon era. i have been blocked on windows/linux developing for other targets. fwiw i use everything, im loyal to whoever puts forth the best thing i can throw my money at. for my professional life, that is unequivocally apple atm. when the day comes some other darkhorse brings forth better hardware ill abandon this env without a second thought. i have no tribalistic loyalties in this space, i just gravitate towards whoever presents me with the best economic win that has the things im after. we havent been talking about itunes for like a decade.


For me at least, not being Linux is a feature. Linux has always been “almost Unix” to the point where now it has become its own thing for better or worse. OS X was never trying to be Linux. It would be better if we still had a few more commercial POSIX implementations.

That is fair, but in my experience most devs are targeting Linux servers, not BSD (or any other flavour, which is what OSX helps with). If OSX were Linux-derived it would suit them just as well.

edit: I suppose I should also note that the vast majority of people developing on MacBooks (in my experience, anyway) are actually targeting Chrome.


> I suppose I should also note that the vast majority of people developing on MacBooks (in my experience, anyway) are actually targeting Chrome.

Point taken. Most developers probably make do with Linux containers rather than macOS VMs.


There is no reality that macOS could be based on Linux.

Turns out, an operating system is more than just a kernel with some userspace crap tacked on top, unlike what Linux distros tend to be.


> Turns out, an operating system is more than just a kernel with some userspace crap tacked on top, unlike what Linux distros tend to be.

This is also my opinion of OSX; let's not pretend that the userland mess is the most beautiful part of OSX.

Apple has great kernel and driver engineering for sure, but once you go up the stack it's duct tape upon duct tape, and you'd better not upgrade your OS too quickly, before they fix the next pile they've just added.


Heterogeneity is the feature. The Linux ecosystem is better off for it (systemd, Wayland, dconf, epoll, and inotify are all based on ideas that were in OS X first), and not being beholden to Linux is a competitive advantage for Apple; everyone wins.

> fault tolerant distributed systems

I mean, there were mainframes which could be described as that. IBM just fixed it in hardware instead of software, so it's not like it was an unknown field.


Even if that were actually true (it’s not in important ways) Google showed you could do this cheaply in software instead of expensive in hardware.

You’re still hand waving away things like inventing a way to make map/reduce fault tolerant and automatic partitioning of data and automatic scheduling which didn’t exist before and made map/reduce accessible - mainframes weren’t doing this.

They pioneered how you durably store data on a bunch of commodity hardware through GFS - others were not doing this. And they showed how to do distributed systems at a scale not seen before because the field had bottlenecked on however big you could make a mainframe.


I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.

I am willing to hear arguments for other approaches.


Not the OP, but: -march says the compiler can assume that the features of that particular CPU architecture family, which is broken out by generation, are available. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or on CPUs from different vendors.

-mtune says "generate code that is optimised for this architecture", but it doesn't enable arch-specific instructions.

Whether these are right or not depends on what you are doing. If you are building gentoo on your laptop you should absolutely -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.

If you are shipping code for a wide variety of architectures, and crucially the method of shipping is binary form, then you want to think more about what you might want to support. If you're shipping standard software, pick a reasonable baseline (check what your distribution uses in its cflags). If however you're shipping compute-intensive software, perhaps you load a shared object per CPU family, or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output, and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)


> Not the OP, but: -march says the compiler can assume that the features of that particular CPU architecture family, which is broken out by generation, are available. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or on CPUs from different vendors.

Or on newer CPUs of the same vendor (e.g. AMD dropped some instructions in Zen that Intel didn't pick up) or even in different CPUs of the same generation (Intel market segmenting shenanigans with AVX512).


Just popping in here because people seem to be surprised by

> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.

This is exactly the use case in HPC. We always build with -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with the -march=native setting.

Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for a major maintenance.


If you get enough of them they can start to look like cattle.

Still, they are all the same breed.


I'm willing to hear arguments for your approach?

It certainly has scale issues when you need to support larger deployments.

[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]


So, do you see now the assumptions baked in your argument?

> when you need to support larger deployments

> shipping

> passing it off to someone else


On every project I've worked on, the PC I've had has been much better than the minimum PC required. Just because I'm writing code that will run nicely enough on a slow PC, that doesn't mean I need to use that same slow PC to build it!

And then, the binary that the end user receives will actually have been built on one of the CI systems. I bet they don't all have quite the same spec. And the above argument applies anyway.


So I take it you don't do cloud, embedded, game consoles, or mobile devices.

Quite hard to build on the exact hardware for those scenarios.


What?! seriously?!

I’ve never heard of anyone doing that.

If you use a cloud provider and a remote development environment (VSCode Remote/JetBrains Gateway) then you're wrong: cloud providers swap out CPUs without telling you, and can sell newer CPUs at older prices if there's less demand for the newer CPUs; you can't rely on that.

To take an old naming convention, even a Xeon E3 CPU is not equivalent to an E5 of the same generation. I'm willing to bet it mostly works, but your claim "I build on the exact hardware I ship on" is much more strict.

The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs, but when deployed it will be a Xeon Scalable datacenter CPU or an Epyc.

Hell, I work in gamedev and we cross compile basically everything for consoles.


… not everyone uses the cloud?

Some people, gasp, run physical hardware, that they bought.


We use physical hardware at work, but it's still not the way you build/deploy unless it's for a workstation/laptop type thing.

If you're deploying the binary to more than one machine, you quickly run into issues where the CPUs are different and you would need to rebuild for each of them. This is feasible if you have a couple of machines that you generally upgrade together, but quickly falls apart at just slightly more than 2 machines.


And all your deployed and dev machines run the same spec- same CPU entirely?

And you use them for remote development?

I think this is highly unusual.


Lots of organizations buy many of a single server spec. In fact that should be the default plan unless you have a good reason to buy heterogeneous hardware. With the way hardware depreciation works they tend to move to new server models “in bulk” as well, replacing entire clusters/etc at once. I’m not sure why this seems so foreign to folks…

Nobody is saying dev machines are building code that ships to their servers though… quite the opposite, a dev machine builds software for local use… a server builds software for running on other servers. And yes, often build machines are the same spec as the production ones, because they were all bought together. It’s not really rare. (Well, not using the cloud in general is “rare” but, that’s what we’re discussing.)


There is a large subset of devs who have worked their entire career on abstracted hardware which is fine I guess, just different domains.

The size of your L1/L2/L3 cache or the number of TLB misses doesn't matter too much if your python web service is just waiting for packets.


So you buy the exact same generation of Intel and AMD chips for your developers as for your servers and your customers? And encode this requirement into your development process for the future?

No? That would be ridiculous. You’re inventing dumb scenarios to make your argument work.

It’s more like: some organizations buy many of the same model of server, make one or two of them their build machines, and use the rest as production. So it’d be totally fine to use march=native there.

You just wouldn’t use those binaries anywhere else. Devs would simply do their own build locally (why does everyone act like this is impossible?) and use that. And obviously you don’t ship these binaries to customers… but, why are we suddenly talking about client software here? There’s a whole universe of software that exists to be a service and not a distributed binary, we’re clearly talking about that. Said software is typically distributed as source, if it’s distributed at all.

There’s a thousand different use cases for compiling software. Running locally, shipping binaries to users, HPC clusters, SaaS running on your own hardware… hell, maybe you’re running an HFT system and you need every microsecond of latency you can get. Do you really think there are no situations ever where -march=native is appropriate? That’s the claim we’re debunking, the idea that "-march=native is always always a mistake". It’s ridiculous.


Because might makes right and any entity with the power to legally put up a fight is in on the game (or wants to be)

It isn't that weird. Just look at the gemini-cli repo. It's a gong show. The issue is that LLMs can be wrong sometimes, sure, but more that all the existing SDLCs were never meant to iterate this quickly.

If the system (the code base, in this case) is changing rapidly, it increases the probability that any given change will interact poorly with any other given change. No single person in those code bases can have a working understanding of them, because they change so quickly. Thus when someone LGTMs a PR that was LLM-generated, they likely do not have a great understanding of the impact it is going to have.


Google is not incentivized to show you good results. You don't pay them, advertisers do and that is who they are working for.

Their job is to provide you just enough "results" that you don't or can't go anywhere else.

No more, no less.


Why would you assume that?

They might charge you less, but they don't have to, and won't if the market allows it.


Companies compete by optimizing margins. Lower margins mean more sales, and more customers for forward-looking sales; higher margins mean more profit per sale.

That's a "fixed" constraint, because maximizing future adjusted value is what companies do.

So they don't play little games with mass products. If they did they would be harming their own bottom line/market cap.

(For small products, careful optimization often doesn't happen, because they are not a priority.)

Note this thesis explains what is going on here. What was previously one kind of customer (wide distribution of use), is now identifiably two. The non-automated token maxers (original distribution) and automated token maxers (all maxed, and growing in number). To maintain margins Anthropic has to move the latter to a new bin.

But the customer centric view also holds. By optimizing margins, that counter intuitively incentivizes reduced pricing on lower utilized products. (Because margin optimization is a balance to optimize total value, i.e. margins are not the variable being maximized.)

The alternatives would be bad for someone. Either they under-optimize their margins, or charge regular customers more, which is unfair. Neither of those would be a rational choice.

(Fine tuning: Well-run companies don't play those games. But companies with sketchy leaders do all kinds of strange things, primarily because they are attempting to manage contradictory stories in order to optimize their personal income/wealth over the company's. But I don't see Anthropic in that category.)


Cell was the PS3, and the Wii used a Power CPU.

IBM had a hand in both, however.


DDR4 is also crazy expensive right now, so this just depends on you having some around from a previous build.

It actually seems to be slightly less expensive than DDR5, perhaps due to the lower throughput that makes it uncompetitive for AI-adjacent workloads.

If you're willing to check the used market it is more affordable as the spike isn't as severe as used DDR5.

Those will dry up soon enough. Corporate laptop refreshes will be drawn out as companies try to save costs given the increased prices.

You'd also better hope AliExpress sellers don't figure out a way to get the RAM out of those things, because they will start harvesting it for sure if there is money to be made.


> Those will dry up soon enough.

We're talking about a Pi replacement. The Pi 5 is slower than a 10-year-old laptop. That gives us a very vast pool of used laptops.

> You'd also better hope AliExpress sellers don't figure out a way to get the RAM

That is a real worry, and I can see used machines being gutted, because selling DDR3/4/5 sticks is way easier and more profitable than selling the whole machine. Adapters from SODIMM to regular DIMM are readily available and cheap, too.


Windows 11 requiring a TPM is still going to force a decent number of replacements: extended support on W10 is $61 Y1, $122 Y2, $244 Y3.

Delaying that refresh might actually end up the more expensive option.


I recently did an install of Windows 11 on a machine without a TPM.

To bypass the check during installation:

    Boot the laptop from your USB.

    When you see the "This PC can't run Windows 11" screen, press Shift + F10 to open a command prompt.

    Type regedit and hit Enter.

    Navigate to HKEY_LOCAL_MACHINE\SYSTEM\Setup.

    Right-click Setup, create a new Key named LabConfig.

    Inside LabConfig, create two DWORD (32-bit) values:

        BypassTPMCheck = 1

        BypassSecureBootCheck = 1

    Close the registry and the command prompt; the installer will now let you proceed.

It's a never-ending cat-and-mouse game, and unsupported hacks like these usually aren't well-received in corporate environments. Decent stop-gap for home use, though!

> Those will dry up soon enough.

And worse, they're shucking surplus machines for RAM and SSDs now. I am seeing more and more eBay auctions for surplus PCs sans SSD and RAM. So the second-hand market is going to be invaded by the reseller parasites, leaving us with $50 CPU-in-a-box and $500+ RAM/SSD parts.


The EDU Neo is $500, too bad it’s not as versatile.

What is the EDU Neo?

The MacBook Neo’s education price of $499

It blows my mind that a Pi is a significant portion of the cost of it.

And the Pi doesn't even come with a monitor, keyboard, speakers, or power supply!

I’d bet a lot that the Neo has a better SSD in it too.

Having an SSD certainly is better than no SSD.

The Pi isn't a loss leader for user acquisition, nor do they get to enjoy Apple's economies of scale. Apple can take a small loss on this and it will still be worth it if they retain the users in their ecosystem.

Is there any evidence that's the case? They have always had massively bigger margins than all other PC manufacturers, so it's unlikely they are selling it at a loss even if it's significantly reduced.

I mean, it's Apple we're talking about. Selling at margins <50% can probably be considered "at a loss"

For most older laptops it's easy enough, you just open them up and take the RAM sticks out. There are SO-DIMM to DIMM adapters to fit a laptop memory stick in a DIMM socket.
