> Complete overkill requiring the use of a YubiKey for key storage and an external RNG source - what problems does it solve? For a YubiKey to act as a poor man's HSM you have to store the PIN in plaintext on the disk
You still can't exfiltrate the key material.
> If it's to protect against physical theft of the keys, they'll just put the entire Raspberry Pi in their pocket.
Just because someone has compromised your device doesn't mean they have physical access. That's the point.
> They're generating the private key on disk then importing into the YubiKey. Which defeats having an external key storage device because you have left traces of the key on disk.
The traces don't have to be left behind. Is this excessive 'overkill', or is the 'digital duct taping the windows and doors' insufficient?
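One way to avoid leaving traces is to generate the key in RAM-backed tmpfs before importing it. A minimal sketch, assuming `/dev/shm` is a tmpfs mount (as on most Linux distros) and that the import uses `ykman` (the hardware step is commented out since it needs an actual YubiKey):

```shell
# Generate the key in tmpfs so it never touches persistent disk
openssl ecparam -name prime256v1 -genkey -noout -out /dev/shm/ca.key

# Keep the public half on disk; it's not secret
openssl ec -in /dev/shm/ca.key -pubout -out ca.pub

# Hardware step, shown for context only (requires a YubiKey):
# ykman piv keys import 9c /dev/shm/ca.key

# Overwrite and remove the RAM copy once imported
shred -u /dev/shm/ca.key
```

With this flow the private key only ever exists in RAM and on the token, not on the Pi's SD card.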
> An instance of openssl or xca covers 99.9% of "homelab" use cases
The interesting thing about this article is that it adds a few 9's that are covered, and it's both easy and cheap.
And? What actual problem does this solve, or what realistic threat does this prevent? They are not decryption keys; they are used to digitally sign certificates.
What the DigiNotar hack taught us years ago is that if your CA is compromised, you are already 0wned; it doesn't matter whether the key is stored in an HSM or not.
All they can do with a stolen key is issue more certificates. Which they can do anyway if they have root access to the CA.
You can put 12 locks on your door but if they're all keyed to the same key you've stored under the plant on the porch, it doesn't really matter.
> The interesting thing about this article is that it adds a few 9's that are covered, and it's both easy and cheap.
Hard to say whether those extra 9's actually require an external RNG for extra entropy.
> Which they can do anyway if they have root access to the CA.
Until you turn it off. If they exfiltrate the keys, it's more complicated.
This goes back to your comment:
> Creates a two-tier PKI... on the same device. This completely defeats the purpose so you can't revoke anything in case of key compromise
But the root key is only created transiently; it doesn't stay on the device, so it can't be used to sign anything later.
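That two-tier flow can be sketched with plain openssl. Names and paths here are illustrative, not the article's exact commands; the root key lives only in tmpfs (assumed to be RAM-backed) until it is destroyed:

```shell
# Root key exists only in RAM-backed tmpfs (assumption: /dev/shm is tmpfs)
openssl ecparam -name prime256v1 -genkey -noout -out /dev/shm/root.key
openssl req -x509 -new -key /dev/shm/root.key -sha256 -days 3650 \
    -subj "/CN=Example Root CA" -out root.crt

# Intermediate key: in the article's flow, this is what ends up on the YubiKey
openssl ecparam -name prime256v1 -genkey -noout -out int.key
openssl req -new -key int.key -subj "/CN=Example Intermediate CA" -out int.csr

# Root signs the intermediate once, then the root key is destroyed;
# only root.crt survives, and it can verify but never sign again
openssl x509 -req -in int.csr -CA root.crt -CAkey /dev/shm/root.key \
    -CAcreateserial -days 1825 -out int.crt
shred -u /dev/shm/root.key

openssl verify -CAfile root.crt int.crt
```

After the shred, a compromise of the device can at worst leak the intermediate, which the (offline) root cert hierarchy can still revoke and replace.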
> What actual problem does this solve or realistic threat does this prevent?
The problem is exfiltrating the key without physical access. Whether or not that's "realistic" enough to matter isn't a question that can be answered generally.
> Hard to say if those extra 9's need an external RNG for extra entropy.
IMO it's not needed. In the author's words: "Optional, but fire"
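For what it's worth, if you do add an external RNG, the safe pattern is to mix it with the kernel's randomness rather than replace it. A minimal sketch, where `external.bin` is a stand-in for bytes read from the external device:

```shell
# Kernel randomness plus a stand-in for the external RNG's output
head -c 32 /dev/urandom > kernel.bin
head -c 32 /dev/urandom > external.bin   # placeholder: really from the external RNG

# Hash the concatenation: the result is at least as unpredictable as the
# strongest input, so a weak extra source can't hurt and a good one can help
cat kernel.bin external.bin | sha256sum | awk '{print $1}' > seed.hex
```

This is why the external RNG is "optional": it can only add to the entropy pool, never subtract from it, so the question is just whether the marginal 9's are worth the hardware.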
> propose whatever alternative you have to the normie in middle america and watch their blank stares.
We also overestimate how important the web in general is to many 'normies'. It was only a little over 10 yrs ago that I had to convince my wife (20-something at the time) that she had a reason to get a smartphone. We're so far apart on the adoption curve that it's very difficult to understand each other. As generations shift, I expect attitudes about lock-in, privacy, dependency etc will as well.
I could see Microsoft saying "we're only allowing apps installed through our 'store', for safety/security reasons, unless you opt out" (gated by some scary warning that doing so is unsafe).
Even if they never charged a fee for running the store, I bet this would raise a lot of eyebrows.
There are ways to do it so that 'bypass' means you effectively wipe the device. If that's not good enough, how do you protect against them just replacing your device with a compromised one that looks similar?
> We don't just need root access, we need undetectable root access.
At some point the argument morphs from 'I should be able to do whatever I want with my device' to 'I should be able to access your service/device with whatever I want'.
The fact that Google allows this shows that
1. Apple could do it with zero security impact on anyone who doesn't opt in
2. They could keep any service-based profit source intact
But they still would never do it, because it's not only service-based profit they want to protect. They want to restrict customers from running competitors' software on their hardware, to ensure they get their cut.
> At some point the argument morphs from 'I should be able to do whatever I want with my device' to 'I should be able to access your service/device with whatever I want'.
I'm not demanding to be able to log in to your service/device and replace IIS with Apache on it. I'm just demanding to be able to access it as a normal user with Firefox instead of Chrome.
When I was in college I worked as an "electronics technician" intern for a few months. We would work through 3-10 page mods to update PCBs from one version to the next. We worked under a scope, and sometimes routed magnet wire all the way across the board, or even thru vias from one side to the other.
It was painstaking work; I had a coworker friend who was a true artist. I never got that good at it, but I sometimes miss that kind of work.
> I'd love to work in a fantasy company that allows for fixing legacy code
You're not supposed to ask. It's like a structural engineer asking if it's okay to spend time doing a geological survey; it's not optional. Or a CFO asking if it's okay to pay down high-interest debt. If you're the 'engineer', you decide the extent to which it's necessary.
No. You discuss it with your manager, and you do it at the appropriate time. Having created, refactored, and deleted lots of technical debt over the past 25 years, trust me: you just don't get to go rogue because "you're the engineer". If you do, it might turn into "you were the engineer".
What if you spend a week or a month refactoring something that needs a quick fix now and is being deleted in 1-2 years? That's waste, and if you went rogue, it's your fault. Besides, a large refactoring always adds QA burden (yes, even if you have tests), and you should not do that without a discussion first, even if you're the founder.
Communicate with your manager and (if they agree) VP if needed, and do the right thing at the right time.
> No. You discuss it with your manager, and you do it at the appropriate time.
Sure, if you're not sure if it's the right thing to do, talk to your manager or TL. A good engineering manager can help.
If your manager "would never allow" it, they're not a good manager. Even for jobs much more menial than engineering, a good manager recognizes that autonomy/trust are critical for satisfaction and growth.
If you're working someplace where you're "not allowed" to make the changes you "wish you could," you're doing yourself a disservice. Find someplace where you're not only "allowed," but expected to have (or develop) the judgement required to make these decisions.
To be clear: "the business" expects (and in the medium/long term requires) engineers to make these decisions themselves. That is the job.
It's why Scotty was always giving longer estimates than Kirk wanted, but Kirk was also able to require an emergency fix to save the ship.
The estimate was building in the time to get it done without breaking too much other stuff. For emergency things, Scotty would be dealing with that after the emergency.
If your captain is always requiring everything be done as an emergency with no recovery time, you've got bigger problems.
That's also not applicable in a business setting. If you have a multi-million-line codebase, you simply can't refactor it within a reasonable time. Also, refactoring can cause issues which then need further fixing and refactoring.
If I touch code that I am not supposed to touch or that does not relate to my direct task I will have HR talks. I'd be lucky to not get laid off for working on things that do not relate to my current task.
The Linux kernel is a multi-million line codebase, and refactoring still happens when needed. Let's not extrapolate from your limited data points as if they are representative for all codebases.
Absolutely not. Structural engineers also have to consider real-life constraints, including cost. We are talking about working with existing structures; it's too late for geological surveys.
That totally depends on the surrounding configuration. If land subsidence was as common as security patches, they would totally be doing monthly surveys.
Structural engineers also don't commonly change structural features after initial delivery; realistically, I would expect changing a two-lane bridge to a four-lane bridge to be more expensive than constructing a four-lane bridge where none exists.
Even if AI gets to the point where I can lazily describe any software project I want and it will make it for me, even better than I described it, I'm 100% sure that nothing will stop me from moving on to other projects short of death/being incapacitated.
I doubt AI will get to that point in my lifetime, but if it did, it would be amazing.
If AI got to that point in my lifetime I'd quit software engineering. That sounds like a fucking mind numbing job. I'd go be a goat farmer or something before I become a professional chat bot chatter
Most of software engineering (differentiating from programming) is just iterating on a model of a problem space in enough detail that computers can be applied to automate a lot of the drudgery.
Software is nowhere close to done "eating the world". I see endless opportunities to apply software to remove drudgery; anything that helps me do that faster is a win.
Everyone has a level of abstraction they're comfortable with. I enjoy working in C++, and I tolerate working in JavaScript for work. I wouldn't be good at writing assembly, and I would dread talking to chatbots to tell them what code to write. I enjoy solving problems, not describing them to chatbots.
IMO it's hardly more of a stretch to say that your C++ compiler is just an obtuse chatbot. If it doesn't complain with template error vomit, you understand its response is 'done'. Otherwise, it's chatting back the errors of your ways.
I can model the execution of a C++ program as it relates to the underlying stack, heap, CPU, etc. in my mind as I'm coding it. I cannot with whatever LLMs do. Anyways, it's just not real engineering to do so. A world in which we spend all day talking to chatbots is simply not a world I want to be a part of
> I can model the execution of a C++ program as it relates to the underlying stack, heap, CPU, etc. in my mind
Fair enough. It's a very approximate model tho; even C++ is operating at a very high level of abstraction.
To me, what's more important is what it does, not how it does it. To use an LLM effectively you absolutely have to understand what it's doing; for software, that's the code it generates and the domain/ecosystem the software operates in.
Conceptually, it's not that different from what senior ICs have been doing for ages, when they delegate to less-senior ICs.