This resonates so much. As someone who's more on the builder/product side than engineering, I've always felt that barrier with Python tooling. The learning curve for environment management has been one of those silent productivity killers.
What strikes me about uv is that it seems to understand that not everyone launching a Python-based project has a CS degree. That accessibility matters—especially in the era where more non-engineers are building products.
Curious: for those who've switched to uv, did you notice any friction when collaborating with team members who were still on traditional setups? I'm thinking about adoption challenges when you're not a solo builder.
That second quote hits hard. Physics got so good at answering questions that people forgot to check if they were asking the right ones. Same thing happens in tech - we're really good at optimizing for metrics, terrible at asking if those metrics matter.
The issue is everyone's optimizing for blog post metrics, not actual problems. "Look at this new pattern!" gets clicks. "We kept it simple and it just works" doesn't. Same thing happened with microservices - everyone rushed in because it sounded cool, then spent years dealing with distributed systems hell.
This kind of thing is how you actually learn what's under the hood. Everyone's building with React Native and Flutter, which is fine until something breaks. Then you're stuck Googling black magic. Starting from assembly teaches you the real cost of abstraction.
You have a very long way between assembly and RN/Flutter. I do agree that it helps to know these things, but you need to learn a lot more before it becomes more generally applicable.
Is this really low level though? Because it's hooking UIKit, which is very high level relative to ASM. I'd be really curious to see an app draw on iOS without UIKit. I don't know if that's possible.
You can write directly to the frame buffer, like a video game. You still need the UIKit import to publish, because it has to be bundled into a .ipa which requires an AppDelegate, a UIBundle, among other things.
If you want to “technically” avoid UIKit, you can drop one step lower. UIKit is implemented on Core Animation. A bare UIView is nearly a pass-through wrapper around CALayer. It wouldn’t be hard to build your own custom UI on CALayers. The old CA tutorials for implementing a ScrollView from the ground up are still floating around out there.
And even that won't do it, because within the constraints of iOS, eventually that framebuffer with software rendering has to be displayed on the screen via an OS API, which is UI Kit.
If you enable the JIT entitlement for personal development, then bundle a mach-o into an entitled app. Or compile it directly on the app and mprotect-x to execute it. Is there something else you can’t do that I’m not considering? I might give this a try.
Is syscall a public API on iOS? In the end, you have to call that to get anything on the screen?
Looking at unistd.h, it seems marked as
__OS_AVAILABILITY_MSG(ios,deprecated=10.0,"syscall(2) is unsupported; "
"please switch to a supported interface. For SYS_kdebug_trace use kdebug_signpost().")
I doubt you can render a UI in pure assembly and show it on the screen without going through UIKit on a non-rooted device, given that even the device driver extension points are quite limited.
Which was the whole discussion point that started the thread: how to make an iOS app with zero references to UIKit.
This isn't an 8- or 16-bit home computer, or a games console, with a fixed address for the framebuffer.
All this teaches is how to put parameters on the stack, pass them to functions, and use the results. It is pretty much a transliteration of what you would do in C.
It's still very educational. It shows how ObjC method calls work under the hood, because even calling objc_msgSend() from plain C involves a certain amount of non-obvious magic (because of the variable argument list and return types).
And tbh I'm kinda surprised how little assembly code it is, less than most UI framework hello-worlds in high level languages ;)
I'm not sure this is entirely fair, though I think you're mostly right. The comment you're replying to is right in terms of the value of understanding one or more levels of abstraction below the one you're working in. Conversely, you're right in that learning assembler isn't going to do much to help you debug a failing Flutter app. It's just attacking the abstraction stack in detail from the opposite end - equally myopic.
But nonetheless valuable because of the additional perspective it brings. That's the real point of it, another lens through which to view and understand the mechanics of the application.
British Museum actually does this with their Greek statues - shows how they were painted. The gap between "marble perfection" and "gaudy colors" is wild. Makes you realize how much our idea of classical taste is just patina.
One view is that the western idea of "good taste" was informed by people looking at greek and roman statues and buildings and incorrectly assuming they were always intended to be plain.
The whole Perl era shaped so much of how we think about text processing. It's funny how tools cycle - awk is "new" again because we forgot the middle chapter. Same thing is happening with Rust vs C - people rediscovering memory safety like it's a fresh idea.
Reddit SEO is a goldmine right now to get great organic visibility outside of the platform. I get the hype, and I already talk about my SaaS there, but is it any good for training AI models?
It’s kind of wild how we end up here over and over: a big government breach, angry headlines, but the tech never seems to change (imo).
If you work in IT, this whole SharePoint story is probably déjà vu.
A few real-world points that stood out to me:
- SharePoint (and a lot of other MS stuff) didn’t win because it was bulletproof, just because it was bundled “FREE” and nobody got fired for rolling it out in the 2000s. Once you’re deep into the Microsoft ecosystem, the cost and pain of replacing it is huge!
- Security honestly feels like lip service for a lot of giants. When someone asks if it’s the number one priority, the answer from experience is “no.” Cost, compliance, available support, and how easy it is to blame a vendor if things fail tend to matter more.
- When people say Linux would be more secure in these environments, maybe. But if Linux or Red Hat took over everywhere, you can bet it would become the juiciest target immediately. Right now, Windows gets a lot of attention because it’s everywhere. And obviously, attackers like to go where the odds of a big payoff are highest.
- A lot of giants aren’t making decisions based only on security or technical merit. It’s about familiarity, employee training costs, consulting partners, and “safe” bets. If you pick Microsoft and get breached, it’s an industry problem. If you pick something niche and get breached... it’s 100% your fault.
- Resistance to change is real. Swapping out platforms isn’t just a technical lift. Management, end users, even IT staff get pretty set in their ways.
Honestly, unless there’s enough public backlash or a regulation hammer, I don’t see the inertia breaking any time soon. For most companies, “patch and carry on” still beats “burn it all down and start fresh.”
While I agree with you on most points, security is never the number one priority. If it were we'd all destroy our computers, never write anything down, and simply accept the collapse of society. Security is always weighed against many other priorities such as authorised users being able to access data, and ease of use. A unique 128 character password for each document would have high security, but be widely considered unacceptable even in a system handling classified material.
This is the crux of the issue. The CIA triad (confidentiality, integrity, and availability) is the root of all security. However, those goals are often self-contradictory.
There will always, for example, be a conflict between availability and confidentiality. Ultimate confidentiality might require that the data be stored in an inaccessible bunker with no outside access. Ultimate availability might involve hosting sensitive data on a publicly accessible server with no access controls.
In the real world we must always balance these needs carefully, and triage available resources to achieve an "ideal" outcome. This means that security will never, and can never, be a solved problem.
The CIA triad comes from an agency that spies on people so I wonder if it truly is a comprehensive philosophy of security. It might be an attempt to confuse those they spy on with the intent to encourage security gaps. Philosophies of any kind are notorious for not being comprehensive or provable. Is there any research that tries to verify this philosophy? I worked on computer security for a few decades and I've never seen a justification for the CIA triad. The security community used to say that advanced persistent threats were "out of scope" because "the cost to defend against them was too high", but today they obviously are not out of scope because APTs are everywhere. Possibly the triad is a false legacy assumption as well. It seemed cool because it came from the CIA, but is it true? Even if it is reasonably true, is it complete?
As an example, diplomacy, open source, shared interests, universal basic income, and education can reduce the desire for attacking. How do these factor into the CIA triad?
I would argue that all models are inherently incomplete because they are models (IE - they are the map not the territory). Rather than worrying about completeness, it's better to ask if the model is useful, and if anything would change about the requirement for tradeoffs in security if we used a more complete model?
I would answer that the triad IS useful in this scenario, and further that if we used an alternative model (the 7 C's, maybe?) we would still find inherently contradictory requirements for almost every security scenario. In fact, we would just find MORE of those trade-offs, further proving that security can never be "perfect."
For example, I can think of several fundamentals the triad doesn't cover directly. Privacy and non-repudiation spring to mind as concepts that don't neatly fit into the CIA triad, but they are the antithesis of each other!
Perfect privacy would require that nobody (including data-owners) can identify the user, and perfect non-repudiation would require that no access be granted without 100% proof of the current user. Again, you are forced to choose and this means that some aspect will always be less than perfect.
> If it were we'd all destroy our computers, never write anything down, and simply accept the collapse of society.
No, this is the same sort of defeatism that prevents us from making progress on security. We could engineer usable systems where actual security is a priority, and not just security theater. We don't because nobody in a position to change anything actually gives a shit.
You’re implying any real system can have a single top priority, which is equally false. There are always multiple priorities, and the one sitting at the top changes based on the context.
> We could engineer usable systems where actual security is a priority,
Security is a priority. But it's not the only priority.
It would be difficult engineering even if it was the only priority, but given that there's little point to security for a system you never deploy, it's not likely to ever completely monopolize focus, either for users or implementers.
At this point I don't think security is a priority at all for companies like MS. Marketing themselves as having security as a priority is. Doing the bare minimum to avoid lawsuits is their priority.
Ultimately though, they know that no matter how many times their failure to invest in security results in their customer's data being compromised or destroyed they'll keep making money.
Their customers are corporations who have insurance to cover their expenses when Microsoft's failure to make security a priority inevitably leads to a breach, and those corporations are able to avoid all accountability for their decision to use Microsoft products, no matter who else gets hurt as a result.
Dealing with yet another security issue caused by Microsoft is just another cost of doing business. It's still cheaper and/or easier for the corporations to keep MS and deal with the endless vulnerability/patch cycle than it is to move to something else and pay people who know what they're doing to manage those new systems so nothing changes.
> When people say Linux would be more secure in these environments, maybe. But if Linux or Red Hat took over everywhere, you can bet it would become the juiciest target immediately.
I do not think that is the only difference between Windows and Linux though.
For one thing, Linux has multiple distros, some very varied. It's less of a monoculture. If Linux were more widely used, it would also mean greater usage for the BSDs, because a lot of things that run on Linux will run on them too.
Linux IS very widely used on servers, and on Chromebooks, and embedded. The kernel and a few other bits are widely used on phones too.
Android was designed from day one specifically to be a leaky sieve that funnels as much of your personal and private data to Google and their partners as possible. They're left with the impossible task of making it harder for third parties to gain access to the data they're collecting without making it too hard for them to collect it for themselves.
> In what world has SharePoint Server and SharePoint Standard + Enterprise User CALs ever been "FREE"?
Yeah.. I think people say "bundled FREE" when they're really referring to MS enterprise packages. It's similar to how Comcast will sell you TV for $100, a landline for $20, and internet for $100, but you can get a TV/landline package for $90, or TV/internet for $130. You can "bundle FREE" phone service on your TV/internet package for an extra $5. (And yes, I've had support tell me "For $10 more a month, you get a free upgrade to 1Gbps." How is that free? They'll say "It's the same package, but one level up for $10 more. It comes with a free 1Gbps upgrade. What doesn't make sense?")
Generally EA licensing didn't work like that. You still picked individual SKUs (Windows client, Windows Server, SQL Standard, etc.); you simply got some level of discount, maybe eval licenses, some level of support, a TAM (CSAM), and paid when you true-up (it was three years, may be different now).
There wasn't, as far as I recall, "buy SQL Server Enterprise, get SharePoint Server Enterprise SKU for free" type licensing deals.
> There wasn't, as far as I recall, "buy SQL Server Enterprise, get SharePoint Server Enterprise SKU for free" type licensing deals.
Yes, I didn't mean to say it was like that. More that you get discounts, credits, etc. Every EA agreement I heard of seemed custom and different for that enterprise's needs. Throwing in Azure credits, or a discount on one product if you get another product or increase volume, etc., seemed to be typical.
Yes, you're correct, especially now in the days of Azure, credits are typical there, though they often require you to migrate (or create) some amount of workload to qualify, i.e., Microsoft knows they'll make that money back long term.
Something to understand here is that SharePoint is not Windows. Sure, it runs on Windows, but the vulnerability here was in the application. Are we going to argue that applications that run on Linux cannot have security vulnerabilities? Especially large, archaic, enterprisey things like this?
I bet Oracle and SAP have similar types of things happen to their application suites but no one runs public websites on Oracle eApplications (yeah, plenty of companies have that exposed to the internet, but it's not The Company's Website)