It's not "all the transitive dependencies". It's only the transitive dependencies you need to explicitly specify a version for because the one that was specified by your direct dependency is not appropriate for X reason.
What if libinsecure 0.2.1 is the version that introduces the vulnerability, do you still want your application to pick up the update?
I think the better model is that your package manager lets you do exactly what you want -- override libuseful's dependency on libinsecure when building your app.
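To make that concrete with one ecosystem that exposes this directly: Go modules let the top-level application pin or swap a transitive dependency with a `replace` directive in its own go.mod. The module paths and versions below are made up for illustration:

```
// go.mod of the application; libuseful and libinsecure stand in for real modules
module example.com/myapp

go 1.22

require example.com/libuseful v1.4.0

// Force the transitive dependency to a specific version for this build only;
// libuseful's own go.mod is left untouched.
replace example.com/libinsecure => example.com/libinsecure v0.2.2
```

npm's `overrides` field and Maven's `<dependencyManagement>` section serve the same purpose: the override lives in the application, not in the library that pulled the dependency in.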
Of course there's no 0-risk version of any of this. But in my experience, bugs tend to get introduced with features, then slowly ironed out over patches and minor versions.
I want no security bugs, but as a heuristic, I'd strongly prefer the latest patch version of all libraries, even without perfect guarantees. Code rots, and most versioning schemes are designed with that in mind.
Except the only reason code "rots" is that the environment keeps changing as people chase the latest shiny thing. Moreover, it rots _faster_ once the assumption that everyone is going to constantly update gets established, since it can be used to justify pushing non-working garbage on the assumption that "we'll fix it in an update".
This may sound judgy, but at the heart it's intended to be descriptive: there are two roughly stable states, and both have their problems.
> an electronic authentication method in which a user is granted access to a website or application only after successfully presenting two or more distinct types of evidence (or factors) to an authentication mechanism.
and concludes with (emphasis mine):
> For the average user, the smartphone has become a single point of failure, where the theft of one device and one piece of knowledge (the passcode) can lead to total financial compromise.
Furthermore, these days I enter the passcode on my phone very rarely (Android requires it after restarting the device or after some amount of time) - normally I use biometric authentication.
The linked WSJ article is a bit hyperbolic, and calling it an Apple "security vulnerability" is typical journalistic overreach - bullshit IMO. If you watch the interview with the guy in jail, the main way he got people's passcodes was to ask for them: he would tell people he had drugs to sell and wanted to give them his info, so he would get their phone and ask them for the code to unlock it.
At least the WSJ report is honest when it says "The biggest loophole: You".
Also, in-person theft is something our civilisation both understands and has adapted to, and it does not scale. So it's never going to be a problem the way, say, password re-use is, or many of the other maladies that come from using "passwords" for online security.
Compromising the smartphone can let the attacker get the password too, though, making it effectively one factor. It would be closer to real 2FA if you entered the password on one device and used another (a Yubikey, a physical TOTP token) as the second factor.
The issue I'm having with this sort of "something you own and something you know/are" two-factor authentication is that it has some potential to cause violence - both can be beaten out of you:
https://www.citizen.co.za/network-news/lnn/article/banking-a...
This is true with 1FA too. 2FA is more effective at stopping the case where you're hacked and you don't even know it because your password was in a leak.
A TAN generator or security key stored in a drawer at home. At least it reduces the opportunities for theft since people don't carry these devices with them all the time as opposed to their phones. Opportunity makes the thief.
Yeah, I often think the issue with cash and crypto is that it can be easily forced away from an individual by any sufficiently armed and unscrupulous party. Money in a financial institution tends to have an upper limit on what could be forced away in a single act, or at least a single transaction cycle.
Staying anonymous. For every single multimillionaire or billionaire out there flaunting their wealth, there is another who's equally secretive about it. There are many folks with tens of billions in assets who don't make their wealth part of their brand.
Like that guy in Texas whose estate paid billions in tax when he passed away.
If people need "`expect` scripting and a few open source packages [to] automate it to be 1 factor", it is effectively 2 factor for 99.9% of the population.
Also, if someone uses a password manager to store both the password and the OTP credential, that is still an improvement to security. Intercepting (e.g. shoulder surfing) or guessing the password is no longer enough, an attacker needs to get into the password manager's vault.
I mean the unit of review is the patch (set), which does not necessarily have a branch associated with it. You can use a branch, but you could just as easily send commits from master and the reviewer can apply them directly on their master branch if desired. The idea of "branch per reviewable unit" was largely created by GitHub.
> This is excruciating in git if you ever need to make a fix to an earlier PR because you have to manually rebase every subsequent change.
Spreading the word about `git rebase --update-refs`, which automatically updates any branches that point to commits along the rebased path (very useful with stacked branches). It is less convenient than what jujutsu offers (you need to know which branches to update, whereas jujutsu automatically updates any dependent change), but it's still very useful if you don't want to or can't switch to another tool.
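For anyone who hasn't tried it, a quick sketch with a hypothetical stack of branches `part-1` -> `part-2` -> `part-3` on top of `main`:

```sh
# start from the tip of the stack
git switch part-3

# edit or reorder commits in part-1 as needed; --update-refs also moves
# part-1 and part-2 to the rewritten commits (flag available since Git 2.38)
git rebase -i --update-refs main

# optional: make it the default so you can't forget the flag
git config --global rebase.updateRefs true
```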
There isn't really any "can't switch to another tool" when it comes to git + jj. You can use it today without anyone on your team knowing - other than that your PRs suddenly became much cleaner
> You need an email_or_error and a name_or_error, etc.
You don't need that. A practical solution is a generic `error` type that you return (with a special value for "no error") and `name` or `email` output arguments that only get set if there's no error.
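In Go terms, where this convention is most idiomatic, that looks something like the sketch below. The names are invented, and Go returns the value alongside the error rather than through an out parameter, but the idea is the same: one generic error type, and the value is only meaningful when the error is nil.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseEmail returns the normalized email and a nil error on success,
// or a zero value and a non-nil error on failure. No email_or_error or
// name_or_error types are needed; the same error type covers every field.
func parseEmail(raw string) (string, error) {
	raw = strings.TrimSpace(raw)
	if !strings.Contains(raw, "@") {
		return "", errors.New("not a valid email address")
	}
	return strings.ToLower(raw), nil
}

func main() {
	email, err := parseEmail("Alice@Example.com")
	if err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("accepted:", email) // only reached when err == nil
}
```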
"Parse, don't validate" is a catchy way of saying "Instead of mixing data validation and data processing, ensure clean separation by first parsing 'input data' into 'valid data', and then only process 'valid data'".
It doesn't mean you should completely eliminate `if` statements and error checking.
> E.g., requiring that a file have the correct MIME type, not be too large, and contain no EXIF metadata.
"Parse, don't validate" doesn't mean that you must encode everything in the type system -- in fact I'd argue you should usually only create new types for data (or pieces of data) that make sense for your business logic.
Here the type your business logic cares about might be "file valid for upload", and it is perfectly fine to have a function that takes a file, performs a bunch of checks on it, and returns a "file valid for upload" newtype if it passes the checks.
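A rough sketch of that shape in Go; the type name, size limit, and exact checks are placeholders for illustration, not a prescribed implementation:

```go
package upload

import (
	"errors"
	"net/http"
)

// ValidUpload is the "file valid for upload" type the business logic accepts.
// The unexported field means code outside this package can only obtain one
// by going through ParseUpload.
type ValidUpload struct {
	data []byte
}

const maxSize = 5 << 20 // 5 MiB, arbitrary for this sketch

// ParseUpload runs all the checks once and returns a ValidUpload on success,
// so downstream code never has to re-validate.
func ParseUpload(data []byte) (ValidUpload, error) {
	if len(data) > maxSize {
		return ValidUpload{}, errors.New("file too large")
	}
	if http.DetectContentType(data) != "image/png" {
		return ValidUpload{}, errors.New("unexpected MIME type")
	}
	// An EXIF-stripping check would go here for formats that carry such
	// metadata; omitted in this sketch.
	return ValidUpload{data: data}, nil
}
```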
> The problem with convention is that no ten people do it the same way
"Stuff that starts with one underscore is an internal implementation detail, use at your own risk" in Python is as close to a universal convention as you can get.