> Over hiring is one thing.. but that wouldnt be a problem if there was an endless stream of projects to take that are value creating.
I very much agree.
A lot of the tech job growth during the late-2010s and pandemic period was frankly BS from an ROI perspective. The late 2010s were really the first time in tech that I started to feel like most of the stuff that needed to be built had been built, and increasingly I was working on BS projects offering less and less value every year.
Consider:
- In the 80s developers were needed to write fundamental business software for word processing and spreadsheets
- In the 90s computers became mainstream and there was a huge demand for consumer software
- In the 00s the internet took off and we needed people to build the web
- In the 10s the smartphone revolutionised computing and we needed people to build apps and rebuild websites to be mobile-first
But towards the late 10s entrepreneurs and investors seemingly ran out of no-brainer tech investments, so they increasingly started trying mental stuff that still promised tech-like returns – blockchain, the metaverse, Web 3.0, [insert traditional industry here] but as a tech company.
I'm not saying there's nothing to build or maintain anymore, but I also no longer see where people think the exponential need for new software and software developers could come from, and I suspect this would have become obvious earlier if it wasn't for ZIRP.
But it's not just a lack of productive things to build; other trends are hurting demand for new SWEs today as well. Consider how completely non-technical people can now start and scale an ecommerce company without any developers. Things that would have taken armies of developers just 10-15 years ago can now largely be done in an afternoon on platforms like Shopify. It's actually hard to believe that just 15 years ago selling things online was very hard if you weren't technical.
Similarly, starting in the early 2010s even being a developer got significantly easier, because increasingly there were packages for everything. Things I might have spent weeks building before could now be built in days or less. Another thing that changed was sites like Stack Overflow and blogs, which help you solve problems and learn new skills. I remember that trying to learn how to do things before the 2010s was hard, and before the 00s it was very hard.
And of course now we also have AI coding tools, which don't just hurt the overall demand for developers but effectively expand the supply of developers to anyone with an internet connection and a computer.
So to summarise:
- There are far fewer good investments to be made in new software today.
- Where there are investments to be made, you need far fewer developers.
- When you do need developers, there are far more people who can do the job.
Even if tech companies are doing well and the number of tech jobs is increasing, the above means the average person trying to find a job in tech today will find it much, much harder than in the past. People working in tech today genuinely should consider a career change if they're primarily in tech for the money.
In calculus the core issue is that the concept of a "function" was undefined but generally understood to be something like what we'd call today an "expression" in a programming language. So, for example, "x^2 + 1" was widely agreed to be a function, but "if x < 0 then x else 0" was controversial. What's nice about the "function as expression" idea is that generally speaking these functions are continuous, analytic [1], etc and the set of such functions is closed under differentiation and integration [2]. There's a good chance that if you took AP Calculus you basically learned this definition.
The formal definition of "function" is totally different! This is typically a big confusion in Calculus 2 or 3! Today, a function is defined as literally any input→output mapping, and the "rule" by which this mapping is defined is irrelevant. This definition is much worse for basic calculus—most mappings are not continuous or differentiable. But it has benefits for more advanced calculus; the initial application was Fourier series. And it is generally much easier to formalize because it is "canonical" in a certain sense: it doesn't depend on questions like "which exact expressions are allowed".
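A standard textbook illustration of how far the mapping definition strays from "function as expression" (my example, not something the article discusses) is the Dirichlet function, usually credited to Dirichlet's work on Fourier series:

```latex
D(x) =
\begin{cases}
  1 & \text{if } x \in \mathbb{Q}, \\
  0 & \text{if } x \notin \mathbb{Q}.
\end{cases}
```

Under the modern definition D is a perfectly legitimate function, yet it is nowhere continuous, so essentially none of the basic-calculus machinery applies to it.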
This is exactly what the article is complaining about. The non-rigorous intuition preferred for basic calculus and the non-rigorous intuition required for more advanced calculus are different. If you formalize, you'll end up with one rigorous definition, which necessarily will have to incorporate a lot of complexity required for advanced calculus but confusing to beginners.
Programming languages are like this too. Compare C and Python. Some things must be written in C, but most things can be more easily written in Python. If the whole development must be one language, the more basic code will suffer. In programming we fix this by developing software as assemblages of different programs written in different languages, but mechanisms for this kind of modularity in formal systems are still under-studied and, today, come with significant untrusted pieces or annoying boilerplate, so this solution isn't yet available.
[1] Later it was discovered that not all such functions are in fact analytic, but that wasn't known for a long time.
[2] I am being imprecise; integrating and solving various differential equations often yields functions that are nice but aren't defined by combinations of named functions. The solution at the time was to name these newly discovered functions.
Unfortunately, Linux needs to be a Windows that's better than Windows for a lot of people. It must support all their hardware and software perfectly and can never have any issues; only then will it be an accepted alternative. Probably because it's free and they want it to work on their existing setup.
Mac users paid money for their choice, so ironically they are more forgiving of the inability to run some Office VBA macros, work with that random MST dual-display dongle, or whatever. They rationalize their expensive purchase as a good decision: it's good enough, and any issues encountered can be solved, like spending 5 times as much on a Thunderbolt dock to do what the $30 MST dongle did, or learning some entirely new $10 app to do what they did on Windows with something else.
I think consumers largely have stopped taking out their wallet for Microsoft, at least enough of them to cause Microsoft to start walking back a little bit.
Nearly 1 billion PCs have stayed on Windows 10; 42% of the global desktop market share is still on 10 despite EOL. Linux has been showing consistent growth in the Steam hardware survey as well, and time will tell, but I have a feeling the MacBook Neo is going to put another nail in Microsoft's consumer coffin.
The problem for us is that's such a tiny margin of Microsoft's customer base. They aren't a consumer company anymore. For Microsoft to feel the pain, we need the big legacy enterprises to start ripping out Windows (and by extension, rip out Windows Server, Azure, M365).
We here on HN are in a unique position to help, with many of us having influence on, or even the authority to make, technical decisions for the companies we work for. It's not enough to stop buying Microsoft at home; we all need to stop buying Microsoft at work.
During the year 1986, there was an anomalous increase in LSI memory problems. Electronics in early 1987 appeared to have problem rates approaching 20 times higher than predicted. In contrast, identical LSI memories being manufactured in Europe showed no anomalous problems. Because of knowledge of the radioactivity problem with the Intel 2107 RAMs, it was thought that the LSI package probably was at fault, since the IBM chips were mounted on similar ceramic materials. LSI ceramic packages made by IBM in Europe and in the U.S. were exchanged, but the European computer modules (with European chips and U.S. packaging) showed no fails, while the U.S. chips with European packages still failed at a high rate. This indicated that the problem was undoubtedly in the U.S.-manufactured LSI chips. In April 1987, significant design changes had been made to the memory chip with the most problems, a 4Kb bipolar RAM. The newer chip had been given the nickname Hera, and so at an early stage the incident became known as the "Hera problem."
By June 1987, the problem was very serious. A group was organized to investigate the problem. The first breakthrough in understanding occurred with the analysis of "carcasses" from the memory chips (the term carcasses refers to the chips on an LSI wafer which do not work correctly, and are not used but saved in case some problem occurs at a future time). Some of these carcasses were shown to have significant radioactivity.
Six weeks was spent in the manufacturing process lines, looking for radioactivity, and traces were found inside various processing units. However, it could not be determined whether these traces came from the raw materials used, or whether they were transferred from the chips themselves, which might have been contaminated earlier in their processing. Further, it was discovered that radioactive filaments (containing radioactive thorium) were commonly used in some evaporators. A detailed analysis by T. Zabel of some of the "hot" chips revealed that the radioactive contamination came from a single source: Po210. This isotope is found in the uranium decay chain, which contains about twelve different radioactive species. The surprising fact was that Po210 was the only contaminant on the LSI chips, and all the other expected decay-chain elements were missing. Hundreds of chips were analyzed for radioactivity, and Po210 contamination was found going back more than a year. Then it was found that whatever caused the radioactivity problem disappeared on all wafers started after May 22, 1987. After this precise date, all new wafers were free of contamination, except for small amounts which probably were contaminated by other older chips being processed by the same equipment. Since it takes about four months for chips to be manufactured, the pipeline was still full of "hot" chips in July and August 1987. Further sweeps of the manufacturing lines showed trace radioactivity, but the plant was essentially clean. The contamination had appeared in 1985, increased by more than 1000 times until May 22, 1987, and then totally disappeared!
Several months passed, with widespread testing of manufacturing materials and tools, but no radioactive contamination was discovered. All memory chips in the manufacturing lines were spot-screened for radioactivity, but they were clean. The radioactivity reappeared in the manufacturing plant in early December 1987, mildly contaminating several hundred wafers, then disappeared again. A search of all the materials used in the fabrication of these chips found no source of the radioactivity. With further screening, and a lot of luck, a new and unused bottle of nitric acid was identified by J. Hannah as radioactive. One surprising aspect of this discovery was that, of twelve bottles in the single lot of acid, only one was contaminated. Since all screening of materials assumed lot-sized homogeneity, this discovery of a single bad sample in a large lot probably explained why previous scans of the manufacturing line had been negative. The unopened bottle of radioactive nitric acid led investigators back to a supplier's factory, and it was found that the radioactivity was being injected by a bottle-cleaning machine for semiconductor-grade acid bottles. This bottle cleaner used radioactive Po210 material to ionize an air jet which was used to dislodge electrostatic dust inside the bottles after washing. The jets were leaking radioactivity because of a change in the epoxy used to seal the Po210 inside the air jet capsule. Since these jets gave off infrequent and random bursts of radioactivity, only a few bottles out of thousands were contaminated.
An excerpt from:
Ziegler, James F., et al. "IBM experiments in soft fails in computer electronics (1978–1994)." IBM Journal of Research and Development 40.1 (1996): 3–18.
Polonium is debuggable. More subtle statistical aberrations would be exponentially harder.
> just like almost all transportation is done today via cars instead of horses.
That sounds very Usanian. Meanwhile, transportation around me is done on foot, by bicycle, bus, tram, metro, train and car. There are good use cases for each method, including the car. If you really want to use an automotive analogy, then sure, LLMs can be like cars. I've seen cities made for cars instead of humans, and they are horrible places to live.
Signed, a person who totally gets good results from coding with LLMs. Sometimes, maybe even often.
But not all work is done by LLMs at the moment, and we can't be sure that it ever will be, so the question is ridiculous.
Maybe one day it will be. Then people can reevaluate their stance. Until that time, it's entirely reasonable to hold the position that you just don't.
This is especially true with how LLM-generated code may affect licensing and other things. There are a lot of unknowns there, and it's entirely reasonable not to want to risk your project's license over some contributions.
I use them all the time at work because, rightly or wrongly, my company has decided that's the direction they want to go.
For open source, I'm not going to make that choice for them. If they explicitly allow for LLM generated code, then I'll use it, but if not I'm not going to assume that the project maintainers are willing to deal with the potential issues it creates.
For my own open source projects, I'm not interested in using LLM generated code. I mostly work on open source projects that I enjoy or in a specific area that I want to learn more about. The fact that it's functional software is great, but is only one of many goals of the project. AI generated code runs counter to all the other goals I have.
Their default solution is to keep digging. It has a compounding effect of generating more and more code.
If they implement something with a not-so-great approach, they'll keep adding workarounds or redundant code every time they run into limitations later.
If you tell them the code is slow, they'll try to add optimized fast paths (more code), specialized routines (more code), custom data structures (even more code). And then add fractally more code to patch up all the problems that code has created.
If you complain it's buggy, you can have 10 bespoke tests for every bug. Plus a new mocking framework created every time the last one turns out to be unfit for purpose.
If you ask to unify the duplication, it'll say "No problem, here's a brand new metamock abstract adapter framework that has a superset of all feature sets, plus two new metamock drivers for the older and the newer code! Let me know if you want me to write tests for the new adapters."
I have noted many times that I had a slab phone with a full-screen color icon grid, a general-purpose OS with internet, and countless 3rd-party apps for every conceivable purpose... 7 full years before the iPhone, and 8 years before the iPhone had 3rd-party apps.
And it wasn't Android; it was a Samsung SPH-i300 running PalmOS.
It was great that there was not really much of an app store; you got apps individually, more or less like desktop OS apps. There might have been app stores that collected apps, but I don't remember ever using any.
I had apps for everything the same as today. Even though the screen was only like 160x240 and the internet was 14.4k, I had browser & email of course, but also ssh, irc, I even had a vnc client! Audible.com player, countless random things like a netmask calculator, resistor color code app, a few different generic db apps where you design your own fields and input/display screens etc. 3rd party phone dialer that integrated the contacts db. I must be forgetting a hundred other things.
The OS wasn't open source but at least the apps could be, so pretty much like Windows & Mac.
All in all I'd prefer Android, where the entire system is open, except Google has somehow managed to make real-world life with Android less open than PalmOS was, even though PalmOS wasn't open source and I think even the development system wasn't free either.
I think the "somehow" is the extremely integrated app store. Previously, if there were any app stores, they didn't really matter. It didn't hurt you not to be in them because hardly any users were either. But today it's basically just a technicality to say that you don't have to be in the official app store, and not even theoretically/technically true in many cases.
> Oil, water temperature and alternator warning lights would be replaced by a single 'general car fault' warning light.
> Occasionally, for no reason, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key, and grabbed the radio antenna.
> Every time GM introduced a new model, car buyers would have to learn how to drive all over again because none of the controls would operate in the same manner as the old car.
> You would press the 'start' button to shut off the engine.
If you live long enough, satire eventually becomes reality.
I'm trying to work with vibe-coded applications and it's a nightmare. I am trying to make one application multi-tenant by moving a bunch of code that's custom to a single customer into config. There are 200+ line methods, dead code everywhere, tons of unnecessary complexity (for instance, extra mapping layers that were introduced to resolve discrepancies between keys, instead of just using the same key everywhere). No unit tests, of course, so it's very difficult to tell if anything broke. When the system requirements change, the LLM isn't removing old code, it's just adding new branches and keeping the dead code around.
I ask the developer the simplest questions, like "which of the multiple entry-points do you use to test this code locally", or "you have a 'mode' parameter here that determines which branch of the code executes, which of these modes are actually used?", and I get a bunch of babble, because he has no idea how any of it works.
Of course, since everyone is expected to use Cursor for everything and move at warp speed, I have no time to actually untangle this crap.
The LLM is amazing at some things - I can get it to one-shot adding a page to a react app for instance. But if you don't know what good code looks like, you're not going to get a maintainable result.
They're also very good at demoralizing people who actually code for a living. Or at least the hype surrounding them is. Up until a couple years ago I'd simply dive into a new coding assignment and be excited to solve all the problems. Some new widget? Maybe I could find a way to make the UX flow better, or add a couple neat little transitions, or heck even improve the mechanics or the business logic. I now find myself looking at assignments and thinking: Is this a waste of my time? I know how I'd do it, but I'm prone to a lot of yak-shaving and perfectionism. Should I just ask Claude to do it?
So alright, let's see what Claude does.
And then I get presented with a piece of code and a product that I wouldn't have chosen - but in some cases would be perfectly sufficient for minor work I shouldn't be yak-shaving. On the other hand, going beyond the simplest or most obvious is how I built my business. Bringing ideas to the table and executing them. So then I discard the Claude code and sit down to write it from scratch.
It is at that exact point that I start to wonder: does anyone even care?
People who take the time to think deeply through their work, and who "hammer in the extra nail" as my grandpa used to say (he was a general contractor) do so most often for their own sense of pride in a job well done, and for the trust that comes from their clients or employers knowing that they will go the extra mile and do a job well. But what happens when employers or clients don't treat the work as important - and when they would be okay with a bad or mediocre version on the cheap? That's not a new problem - usually I just won't work for those people.
But I just hate the pessimistic feeling I get about doing what I always loved to do - writing bespoke code - when I keep asking myself: do these vibe coders know something more than I do? Should I try yet again on that route? Worse, am I just wasting my time perfecting something that no one else will appreciate?
To anyone outside engineering, writing code looks like wizardry. I think the most common and most demoralizing outcome of LLM vibe coding has been to incorrectly make them think it's suddenly easier. And a new crop of vibe coders who think they don't have to think for themselves? They're not engineers. They may even be enemies of good engineering. They have more in common with the bosses and clients we've always had who said "hey, this should be a very easy request, can you just add a button to the customer app that will (fill in the blank with some wildly complicated business transaction that definitely can't be handled in one click)."
All of it is sapping my motivation to do what I was good at, which is solving problems and writing well-tested code.
On one hand, one is reminded on a daily basis of the importance of security, of strictly adhering to best practices, of memory safety, password strength, multi-factor authentication and complex login schemes, end-to-end encryption and TLS everywhere, quick certificate rotation, VPNs, sandboxes, you name it.
On the other hand, it has become standard practice to automatically download new software that will automatically download new software and so on, to run MITM boxes and opaque agents on every device, to send all communication to Slack and all code to Anthropic in near real time...
I would like to believe that those trends come from different places, but that's not my observation.
My favorite example of this is how Windows NT has had async IO forever, while also being notorious for having slower storage performance than Linux. And when Linux finally got an async API worth using, Microsoft immediately set about cloning it for Windows.
Theoretical or aesthetic advantages are no guarantee that the software in question will actually be superior in practice.
Perhaps it depends on the nature of the tech debt. A lot of the software we create has consequences beyond a particular codebase.
Published APIs cannot be changed without causing friction on the client's end, which may not be under our control. Even if the API is properly versioned, users will be unhappy if they are asked to adopt a completely changed version of the API on a regular basis.
Data that was created according to a previous version of the data model continues to exist in various places and may not be easy to migrate.
User interfaces cannot be radically changed too frequently without confusing the hell out of human users.
My story is simpler. Microsoft dropped support for Windows 10 and gave me no upgrade path to Windows 11, because my CPU was apparently 5 years too old.
So I installed Fedora on that machine, I learned the process, I went through the hurdles. It wasn’t seamless. But, Fedora never said “I can’t”. When it was over, it was fine.
If only Microsoft had just let me install Windows 11 and suffer whatever perf problems my CPU would bring. Then I could have considered a hardware upgrade, maybe.
But, “you can’t install unless you upgrade your CPU” forced me to adopt Linux. More importantly, it gave me a story to tell.
There is a marketing lesson there somewhere, like Torvalds’ famous “you don’t break userspace”, something along the lines of “you don’t break the upgrade path”.
There's a lot of chatter here about macOS' Unix certification. But in a post shared by another user, it appears that the actual content of that Unix certification vindicates OP— macOS' official Unix compatibility requires disabling SIP:
> So, if you want your installation of macOS 15.0 to pass the UNIX® 03 certification test suites, you need to disable System Integrity Protection, enable the root account, enable core file generation, disable timeout coalescing, mount any APFS partitions with the strictatime option, format your APFS partitions case-sensitive (by default, APFS is case-insensitive, so you’ll need to reinstall), disable Spotlight, copy the binaries uucp, uuname, uustat, and uux from /usr/bin to /usr/local/bin and the binaries uucico and uuxqt from /usr/sbin to /usr/local/bin, set the setuid bit on all of these binaries, add /usr/local/bin to your PATH before /usr/bin and /usr/sbin, enable the uucp service, and handle the mystery issues listed in the four Temporary Waivers.
So it seems very fair to say then, that features like SIP and the SSV are genuine turns away from Unix per se, even given the fact of the certification.
> The consequence of this is that when receiving inbound traffic, the router needs needs to be configured with where to send the traffic on the local network. As a result, it will drop any traffic that doesn’t appear in the “port forwarding” table for the NAT.
As I keep trying to explain each time this comes up: no, it doesn't and it won't.
When your router receives incoming traffic that isn't matched by a NAT state table entry or static port forward, it doesn't drop it. Instead, it processes that traffic in _exactly_ the same way it would have done if there was no NAT going on: it reads the dst IP header and (in the absence of a firewall) routes the packet to whatever IP is written there. Routers don't drop packets by default, so neither will routers that also do NAT.
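To make that concrete, here is a toy model of the two steps (the tables, addresses and helper names below are entirely made up for illustration): the NAT lookup and the routing decision are independent, and a NAT miss simply leaves the packet untouched for the router to forward.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Packet:
    dst_ip: str

# Hypothetical port-forward/NAT table and routing table, purely illustrative.
port_forwards = {5000: "192.168.1.10"}        # external port -> internal host
routing_table = [("192.168.1.0/24", "lan0"),  # directly connected LAN
                 ("0.0.0.0/0", "wan0")]       # default route

def handle_inbound(pkt: Packet, dst_port: int) -> str:
    # Step 1: NAT. A matching entry rewrites the destination address.
    # A miss does NOT drop the packet; the NAT step is simply a no-op.
    if dst_port in port_forwards:
        pkt.dst_ip = port_forwards[dst_port]
    # Step 2: routing. With no firewall rules, the packet is forwarded out
    # whatever interface the routing table says owns pkt.dst_ip.
    dst = ipaddress.ip_address(pkt.dst_ip)
    for prefix, iface in routing_table:
        if dst in ipaddress.ip_network(prefix):
            return iface
    raise ValueError("no route to host")

# A packet addressed straight to an internal RFC1918 address, with no
# port-forward entry, still gets routed onto the LAN in this model.
print(handle_inbound(Packet("192.168.1.23"), dst_port=443))  # -> lan0
```

Any dropping comes from a firewall that happens to sit next to the NAT, not from the address translation itself.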
Of course, this just strengthens your point that NAT isn't security.
What's reasonable is: "Set reserved fields to 0 when writing and ignore them when reading." (I heard that was the original example). Or "Ignore unknown JSON keys" as a modern equivalent.
What's harmful is: Accept an ill-defined superset of the valid syntax and interpret it in undocumented ways.
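As a small sketch of the benign version of the rule (the Config schema and field names here are hypothetical): accept documents that carry keys you don't know about, but only ever act on the ones you do, and never guess at what malformed input "probably meant".

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Config:
    host: str
    port: int

def parse_config(text: str) -> Config:
    raw = json.loads(text)  # still strict: invalid JSON is rejected outright
    known = {f.name for f in fields(Config)}
    # Tolerant in a well-defined way: unknown keys are ignored, not
    # reinterpreted, so newer writers can add fields without breaking us.
    return Config(**{k: v for k, v in raw.items() if k in known})

print(parse_config('{"host": "example.org", "port": 443, "comment": "ignored"}'))
# Config(host='example.org', port=443)
```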
I like to say that there are two primary factors when we talk about how "fast" a language is:
1. What costs does the language actively inject into a program?
2. What optimizations does the language facilitate?
Most of the time, it's sufficient to just think about the first point. C and Rust are faster than Python and Javascript because the dynamic nature of the latter two requires implementations to inject runtime checks all over the place to enable that dynamism. Rust and C simply inject essentially zero active runtime checks, so membership in this club is easy to verify.
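A tiny illustration of the first point, using nothing but Python's built-in dis module: even x + 1 compiles to one generic add instruction, and the interpreter has to re-discover the operand types every time it executes it, which is exactly the kind of injected cost C and Rust don't pay.

```python
import dis

def inc(x):
    return x + 1

# The bytecode contains a single generic add (BINARY_ADD, or BINARY_OP on
# newer CPython). Its implementation checks the runtime types of both
# operands on every call before it can do any arithmetic.
dis.dis(inc)

# The payoff for that per-operation check is dynamism: the same code
# works unchanged for ints and floats.
print(inc(41), inc(2.5))
```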
The second one is where we get bogged down, because drawing clean conclusions is complicated by the (possibly theoretical) existence of optimizing compilers that can leverage the optimizability inherent to the language, as well as the inherent fragility of such optimizations in practice. This is where we find ourselves saying things like "well Rust could have an advantage over C, since it frequently has more precise and comprehensive aliasing information to pass to the optimizer", though measuring this benefit is nontrivial and it's unclear how well LLVM is thoroughly utilizing this information at present. At the same time, the enormous observed gulf between Rust in release mode (where it's as fast as C) and Rust in debug mode (when it's as slow as Ruby) shows how important this consideration is; Rust would not have achieved C speeds if it did not carefully pick abstractions that were amenable to optimization.
And adding to this: using the card gives me peace of mind because it never runs out of battery. If I only used my phone for payments and it died while I was out, I would be screwed. Can't call a friend, can't pay for transit, I guess I'm walking for hours to get home? Since I use the card to pay, if my phone dies, the worst thing that happens to me is I might need to look at a physical map to figure out which train to take home.
I will spare you the obvious points about the illegality of such actions and how serious this is.
I will just say something else: I grew up as a kid between the 80s and 90s, when the world felt like it was going towards a brighter age of peace and respect. Berlin wall falling, China opening, Apartheid ending in South Africa, even Palestine and Israel were moving towards a more peaceful future.
But since then the world has just progressed toward darker and darker ages.
The general public not caring anymore about any tragedy (it's just news), the general public being fine with their press freedom being eroded, journalists being spied on and targeted, more and more conflicts all around.
I just don't see or feel we're heading where we should, considering how developed and rich we are.
We should boast about how well we raise our kids, how safe and healthy our cities are, but it's nothing but ego, ego, money and money.
Competition. In the first half of the 90s Windows faced a lot more of it. Then it didn't, and standards slipped. Why invest in Windows when people will buy it anyway?
Upgrades. In the first half of the 90s Windows was mostly software bought by PC users directly, rather than getting it with the hardware. So, if you could make Windows 95 run in 4mb of RAM rather than 8mb of RAM, you'd make way more sales on release day. As the industry matured, this model disappeared in favor of one where users got the OS with their hardware purchase and rarely bought upgrades, then never bought them, then never even upgraded when offered them for free. This inverted the incentive to optimize because now the customer was the OEMs, not the end user. Not optimizing as aggressively naturally came out of that because the only new sales of Windows would be on new machines with the newest specs, and OEMs wanted MS to give users reasons to buy new hardware anyway.
UI testing. In the 1990s the desktop GUI paradigm was new and Apple's competitive advantage was UI quality, so Microsoft ran lots of usability studies to figure out what worked. It wasn't a cultural problem because most UI was designed by programmers who freely admitted they didn't really know what worked. The reason the start button had "Start" written on it was because of these tests. After Windows 95 the culture of usability studies disappeared, as they might imply that the professional designers didn't know what they were doing, and those designers came to compete on looks. Also it just got a lot harder to change the basic desktop UI designs anyway.
The web. When people mostly wrote Windows apps, investing in Windows itself made sense. Once everyone migrated to web apps it made much less sense. Data is no longer stored in files locally so making Explorer more powerful doesn't help, it makes more sense to simplify it. There's no longer any concept of a Windows app so adding new APIs is low ROI outside of gaming, as the only consumer is the browser. As a consequence all the people with ambition abandoned the Windows team to work on web-related stuff like Azure, where you could have actual impact. The 90s Windows/MacOS teams were full of people thinking big thoughts about how to write better software hence stuff like DCOM, OpenDoc, QuickTime, DirectMusic and so on. The overwhelming preference of developers for making websites regardless of the preferences of the users meant developing new OS ideas was a waste of time; browsers would not expose these features, so devs wouldn't use them, so apps wouldn't require them, so users would buy new computers to get access to them.
And that's why MS threw Windows away. It simply isn't a valuable asset anymore.
Just for what it's worth, I tried to explain the context and the historical importance when I wrote about the original discovery of the tape, and about the recovery:
It is us developers who convinced our management to purchase GitHub Enterprise to be our forge. We didn't pay any heed to the values of software freedom. A closed-source, proprietary product had good features; we saw that and convinced our management to purchase it. Never mind what cost it would impose in the future when the good software gets bad owners. Never mind that there were alternatives that were inferior but were community-developed, community-maintained and libre.
The writing is on the wall. First it was UX annoyances. Then it was GitHub Actions woes. Now it is paying money for running their software on your own hardware. It's only going to go downhill. Is it a good time now to learn from our mistakes and convince our teams and management to use community-maintained, libre alternatives? They may be inferior. They may lack features. But they're not going to pull user-hostile tricks like this on you and me. And hey, if they are lacking features, maybe we should convince our management to let us contribute time to the community to add those features? It's a much better investment than sinking money into software that will only grow more and more user-hostile, isn't it?
All the issues basically boil down to "nobody wants to do the busywork of CVE filtering, triage, rejections, changes".
As a developer, kernel or otherwise, you get pestered by CVE hunters who create tons of CVE slop, wanting a CVE on their resume for any old crash, null pointer deref, out of bounds read or imaginary problem some automated scanner found. If you don't have your own CNA, the CVE will get assigned without any meaningful checking. Then, as a developer, you are fucked: Usually getting an invalid CVE withdrawn is an arduous process, taking up valuable time. Getting stuff like vulnerability assessments changed is even more annoying, basically you can't, because somebody looked into their magic 8ball and decided that some random crash must certainly be indicative of some preauth RCE. Users will then make things worse by pestering you about all those bogus CVEs.
So then you will first try to do the good and responsible thing: Try to establish your own criteria as to what a CVE is. You define your desired security properties, e.g. by saying "availability isn't a goal, so DoS is out of scope", "physical attacker access is not assumed". Then you have criteria by which to classify bugs as security-relevant or not. Then you do the classification work. But all that only helps if you are your own CNA, otherwise you will still get CVE slop you cannot get rid of.
Now imagine you are an operating system developer, things get even worse here: Since commonly an operating system is multi-purpose, you can't easily define an operating environment and desired security properties. E.g. many kiosk systems will have physical attackers present, plugging in malicious hardware. Linux will run on those. E.g. many systems will have availability requirements, so DoS can no longer be out of scope. Linux will run on those. Hardware configurations can be broken, weird, stupid and old. Linux will run on those. So now there are two choices: Either you severely restrict the "supported" configurations of your operating system, making it no longer multi-purpose. This is the choice of many commercial vendors, with ridiculous restrictions like "we are EAL4+ secure, but only if you unplug the network" or "yeah, but only opensshd may run as a network service, nothing else". Or you accept that there are things people will do with Linux that you couldn't even conceive of when writing your part of the code and introducing or triaging the bug. The Linux devs went with the latter, accept that all things that are possible will be done at some point. But this means that any kind of bug will almost always have security implications in some configuration you haven't even thought of.
That weird USB device bug that reads some register wrong? Well, that might be physically exploitable. That harmless-looking logspam bug? Will fill up the disk and slow down other logging, so denial of service. That privilege escalation from root to kernel? No, this isn't "essentially the same privilege level so not an attack" if you are using SElinux and signed modules like RedHat derivatives do. Since enforcing privileges and security barriers is the most essential job of an operating system, bugs without a possible security impact are rare.
Now seen from the perspective of some corporate security officer, blue team or devops sysadmin guy, that's of course inconvenient: There is always only a small number of configurations they care about. Building a webserver has different requirements and necessary security properties than building a car. Or a heart-lung machine. Or a rocket. For their own specific environment, they would actually have to read all the CVEs with those requirements in mind, and evaluate each and every CVE for the specific impact on their environment. Now in those circles, there is the illusion that this should be done by the software vendors, because otherwise it would be a whole lot of work. But guess what? Vendors either restrict their scope so severely that their assessment is useless except for very few users. Or vendors are powerless because they cannot know your environment, and there are too many environments to assess them all.
So IMHO: All the whining about the kernel people doing CVE wrong is actually the admission that the whiners are doing CVE wrong. They don't want to do the legwork of proper triage. But actually, they are the only ones who realistically can triage, because nobody else knows their environment.
> FreeBSD was also picky with its hardware choices. I'm not sure that the hardware I had would have been fully supported
Copy / paste of my comment from last year about FreeBSD
I installed Linux in fall 1994. I looked at Free/NetBSD but when I went on some of the Usenet BSD forums they basically insulted me saying that my brand new $3,500 PC wasn't good enough.
The main thing was this IDE interface that had a bug. Linux got a workaround within days or weeks.
The BSD people told me that I should buy a SCSI card, SCSI hard drive, SCSI CD-ROM. I was a sophomore in college and I saved every penny to spend $2K on that PC and my parents paid the rest. I didn't have any money for that.
The sound card was another issue.
I remember software-based "WinModems", but Linux had drivers for some of these. Same for software-based "Win Printers".
When I finally did graduate and had money for SCSI stuff I tried FreeBSD around 1998 and it just seemed like another Unix. I used Solaris, HP-UX, AIX, Ultrix, IRIX. FreeBSD was perfectly fine but it didn't do anything I needed that Linux didn't already do.
I always saw it as two different mindsets for data storage.
One vision is "medium-centric". You might want paths to always be consistently relative to a specific floppy disc regardless of what drive it's in, or a specific Seagate Barracuda no matter which SATA socket it was wired to.
Conversely it might make more sense to think about things in a "slot-centric" manner. The left hand floppy is drive A no matter what's in it. The third SATA socket is /dev/sdc regardless of how many drives you connected and in what order.
Either works as long as it's consistent. Every so often my secondary SSD swaps between /dev/nvme0 and /dev/nvme1 and it's annoying.
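For what it's worth, here's a small sketch (assuming a Linux box where udev populates /dev/disk/by-id and /dev/disk/by-uuid) that prints the stable, medium-centric names behind the slot-centric /dev/nvme* ones; putting the by-uuid name in fstab is the usual fix for the nvme0/nvme1 shuffle.

```python
from pathlib import Path

# udev exposes "medium-centric" aliases as symlinks: by-id names follow the
# physical drive and by-uuid names follow the filesystem, while /dev/nvme0n1
# and friends are "slot-centric" and can change between boots.
for alias_dir in ("/dev/disk/by-id", "/dev/disk/by-uuid"):
    d = Path(alias_dir)
    if not d.is_dir():
        continue
    print(f"{alias_dir}:")
    for link in sorted(d.iterdir()):
        print(f"  {link.name:45} -> {link.resolve()}")
```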