Among other things, the MacBook batteries are larger than the iPad batteries. Power consumption of the screen and other components will obviously be different, but I would imagine that the MacBook Pro having a battery ~2x as large as that of the smaller iPad Pro would make a big difference.
Anyhow, it is just annoying, and they broke NPM Audit with these reports.
It is good to fix all possible bugs, but many of these are nowhere close to as bad as the reports make them out to be.
But maybe this is needed just to get rid of these issues in general? So a wave of regex vulnerability reports comes in, then we build this type of checking into prettier or similar, and we don't have these in the future?
EDIT: It appears there was a project that found hundreds of CVE-reported regex vulnerabilities in npm projects -- this may be one of the sources of the mass reports. See the bottom of this résumé: https://yetingli.github.io
I'm a maintainer of a few of the larger packages on npm. This is generally pretty accurate. Snyk Security seems to find only regex DoS bugs, and I'm a bit disappointed that they're classified as high severity; they're also the only ones submitting reports right now.
They seem pretty adamant on filing CVEs despite what the owner says. (It's normally fine, but these DoS vulns require very large input to be handed to the function by untrusted sources, which, given how these libraries work, isn't going to be very common.)
Now, I have people yelling at me about dependent packages not being updated because they don't understand version ranges, or because some audit states they are high vulns, or whatever.
Super broken, everything related to npm's package lock stuff is broken by design. I've been saying it for years now and it seems people still cling to blindly trusting what corporations say.
> Super broken, everything related to npm's package lock stuff is broken by design. I've been saying it for years now and it seems people still cling to blindly trusting what corporations say.
Because this isn't true. Just because you're experiencing this effect (which blows) doesn't mean the tool and related tooling are somehow broken. These regex issues should be fixed, libraries should update to safe versions, things should advance, and we should use any incentive we have to make this happen.
I think it helps to inform the developer about possible issues, but in most cases, depending on the software, this is plainly not relevant and can be ignored. I wouldn't classify it as high severity. Also, it might just not be trivial to make a regex, or whatever mechanism was declared a vulnerability, immune to DoS.
Might be nice to be able to tag libraries that should be ignored in audits. Perhaps such a feature exists; I'm not really an npm expert. But if your project has too many of these "high severity" problems, you probably stop paying attention to them.
Still, I think the availability of such audits from the package manager is quite neat. As an embedded dev I think these are certainly luxury problems.
A bit pretentious to imply you're better than everyone else at writing regular expressions, so much so that you'd never write one that had exponential time/space complexity on large inputs.
Or do you just not understand what regex DoS vulnerabilities are?
Either way, you come across very foul and condescending in this comment.
> Just because you're experiencing this effect (which blows) doesn't mean the tool and related tooling are somehow broken.
I've been in the node scene since 0.10. That's around 10 years. My packages have billions of downloads annually. My viewpoint here carries the weight of hours of debug time and frustration and confused users of my code, as well as meeting and knowing the npm staff at the time quite personally, and knowing under which circumstances package lock files were implemented.
They are broken.
> These Regex issues should be fixed
They are, pretty much immediately after they're reported.
> libraries should update to safe versions
I check all the version ranges of dependent libraries when I push a patch with vuln fixes. They get pulled just fine without needing to update every single package. This is what version ranges are for.
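The mechanics being described can be sketched in a few lines. This is a simplified illustration of caret-range matching, not real semver: it ignores prerelease tags and the special `0.x` rules, and the function name is my own.

```javascript
// Simplified caret-range check: ^1.2.3 accepts >=1.2.3 <2.0.0, so a patched
// 1.2.4 flows to dependents without any of them changing a thing.
function satisfiesCaret(version, base) {
  const [v, b] = [version, base].map(s => s.split(".").map(Number));
  if (v[0] !== b[0]) return false;        // major must match
  if (v[1] !== b[1]) return v[1] > b[1];  // any newer minor is acceptable
  return v[2] >= b[2];                    // same minor: patch must not regress
}

console.log(satisfiesCaret("1.2.4", "1.2.3")); // true: patch fix is pulled in
console.log(satisfiesCaret("2.0.0", "1.2.3")); // false: major bump is not
```

This is exactly why a maintainer's patch release propagates: the range in dependents' package.json already admits it.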
> things should advance
Yes, but this is nebulous and vague and beside the point.
> and any incentive we have we should use to make this happen.
I don't see where the disagreement is. This is exactly what happens all the time, nothing is the problem here. I don't get your point.
---
Package lock files were designed in a few short days and pushed out prematurely, without much review, by a single Npm employee (at the time), since they promised it for the v5 release. They were on a time crunch because they were trying to keep up with Node.js's next major release timeline, which operates independently of npm's (at least, that's how it was conveyed to me).
So this change got pushed out, had an absolute mountain of bugs that took ages to fix (e.g. at one point adding a new dependency would delete your entire node_modules folder), and promised added security when in reality they do nothing of the sort.
Instead, they cause subtle caching-related bugs, they add an artifact to source control (which is always a code smell in my book), crap up diffs/PRs, cause headaches across platforms, and do very little to help... anything, really.
They're super, super broken by design. Yet npm tells you you need them ("please commit this to your repository") and refuses to do basic security things without them (npm audit).
So why were they added? IIRC it was because the version resolution was a massive strain on npm's servers, so lockfiles removed the need to fetch tons of version information each time you added another dependency.
Oh, and don't even begin to whine about them on Twitter (at the time), lest you be yelled at by the implementor for being ignorant or something.
It was a shit show. They add absolutely nothing to the industry.
Your builds are not reproducible with anything related to npm. Neither npm nor any bundler that I'm aware of guarantees that.
Unless we're not talking about the same reproducibility property. Also I don't know what "hermetic" means in this context but I doubt it's anything that npm solves correctly.
There is a way, but it's troublesome. Create a docker image with installed node modules. Save it, and from then onwards you have frozen node modules. If you need a new dependency/updated version you need to create a new image and npm i.
That's absolutely no different than just installing and not re-installing. Docker adds nothing in this case.
Not re-building doesn't make your build reproducible. It just means that you're... not building. If I save the result of a single iteration of an RNG, I can't claim that the RNG always produces the same result because I saved the result somewhere...
Where did they say they're not building? Building your app does not mean you install the modules every time. Some apps are so large they have to be split into chunks/layers anyway. In Go, this used to be the way you'd add deps: check the entire source into your version control.
The code being the same != reproducible. Build tools can incorporate e.g. build timestamps into the built artifacts, or randomize the output for e.g. pattern scanning/patch deterrence.
The input is irrelevant. I think you should have a look at what reproducible builds really are before evangelizing them.
> A reproducible build means anybody on any machine can build the same thing someone else has on theirs. That’s it.
No. A reproducible build is a guarantee that two builders of the same codebase, or the same codebase built multiple times, will produce bit-for-bit identical artifacts, every time, guaranteed.
*There are no Node.js-related build systems in mainstream use I am aware of that have any such guarantees. No, docker does not make any such guarantees. No, just because you pinned dependencies does not make that guarantee. No, just because you archived the codebase and vendored your dependencies does not make that guarantee.*
Please educate yourself before dying on a hill for a topic you're misrepresenting entirely.
> No. A reproducible build is a guarantee that two builders of the same codebase, or the same codebase built multiple times, will produce bit-for-bit identical artifacts, every time, guaranteed.
That's what I said :)
> Please educate yourself before dying on a hill for a topic you're misrepresenting entirely.
I'll say the same. I've only been doing this for near 30 years ;)
But here you go, here's one example:
1) copy source to destination directory
2) run private npm
3) use private npm repo
4) freeze private npm repo
5) use npm install like normal
here's another:
1) check all node_modules directories into version control
2) ensure no native packages are used
3) copy entire directory structure to destination dir
You are clearly inexperienced, or very focused on node.js only.
> I've been in the node scene since 0.10. That's around 10 years. My packages have billions of downloads annually. My viewpoint here carries the weight of hours of debug time and frustration and confused users of my code, as well as meeting and knowing the npm staff at the time quite personally
So when you say "npm staff at the time", do you mean at the time of node 0.10?
> and knowing under which circumstances package lock files were implemented.
> Package lock files were designed in a few short days and pushed out prematurely without much review by a single Npm employee (at the time)
The amusing thing about your comment here is the parts which are accidentally correct.
`package-lock.json` files use the same file format as `npm-shrinkwrap.json` files. Always have, although of course the format of this file has changed significantly over the years, most dramatically with npm 7.
The "design" of the shrinkwrap/package-lock file was done rather quickly, since it was initially just a JSON dump of (most of) the data structure that npm was already using for dependency tree building. However, as far as I know, the days were the standard length of 24 hours, so while that may be "short", certainly shorter than I'd often prefer, they were (as far as I know) no shorter than any other days.
This was indeed shipped without any review by even a single "npm employee", which should not be surprising, as "npm" was not at that time a legal entity capable of hiring employees. The initial work was done by Dave Pacheco, and reviewed by npm's author (at that time its sole committer and entire development staff), both of whom were Joyent employees at the time.
The use of a shrinkwrap as a non-published normal-use way to snapshot the tree at build time and produce reproducible builds across machines and time was not implemented by default until npm v5, but there wasn't really much to rush, on that particular feature. You could argue that npm 5 itself was rushed, and that's probably a fair claim, since there was some urgency to ship it along with node version 8, so as not to wait a year or more to go out with node v10.
> So this change got pushed out, had an absolute mountain of bugs that took ages to fix...
Idk, I think calling it a "mountain" is relative, actually ;)
> They're super, super broken by design.
I know you're using this phrase "broken by design" in the same sense as the author of the OP means it, but... has language just changed on me here, and I didn't notice?
As I've always heard the term used, something is "broken by design" when the actual intent is for a system to fail in some way, to achieve some goal. For example, a legislative or administrative process that is intentionally slow-moving and unable to accomplish its goals in a reasonable time frame, with the hope that this leaves room for independent innovation. Or a product that requires some minor upgrade or repair to continue working, so that the seller can keep tabs on their customers more easily. That kind of thing.
I think what you mean is not that it's "broken by design", but rather it's "a broken design". Unless this is like "begging the question", and I should just accept that I'm gradually coming to speak a language of the past, while the future moves on. It's certainly not intended to cause problems, as far as I'm aware.
If you really do mean "broken by design" (in the sense of a tail light that goes out after 50k miles so that you will visit the dealership and they can sell you more stuff), I'm super curious what you think npm gets out of it.
> Yet npm tells you you need them ("please commit this to your repository") and refuses to do basic security things without them (npm audit).
As of npm v7, there's no longer any practical reason why it can only audit the lockfile, rather than the actual tree on disk. Just haven't gotten around to implementing that functionality. If you want it changed, I suggest posting an issue https://github.com/npm/cli/issues. There's some question as to whether to prioritize the virtual tree or the actual tree, since prioritizing the actual tree would be a breaking change, but no reason why it can't fall back to that if there's no lockfile present.
But even approaching build reproducibility is impossible without lockfiles. If a new version of a transitive dependency is published between my install and yours, we'll get different package trees. If we both install from the same lockfile, we'll get the same package tree. (Not necessarily the same bytes on disk, since install scripts can change things, but at least we'll fetch the same bytes, or the build will fail.)
> So why were they added? IIRC it was because the version resolution was a massive strain on npm's servers, so lockfiles removed the need to fetch tons of version information each time you added another dependency.
You do not recall correctly, sorry. (Or maybe you correctly recall an incorrect explanation?) The answer is reproducible builds. Using a lockfile does reduce network utilization in builds, but not very significantly.
> Oh, and don't even begin to whine about them on Twitter (at the time), lest you be yelled at by the implementor for being ignorant or something.
I hope my tone is civil and playful enough in this message to not consider my response "yelling".
A regex "denial of service" "vulnerability" could be important, if it shows up in code that processes untrusted input from end users.
But NPM Audit has no idea of context-- a "critical" bug in `browserslist`, which, in this context, is never used outside the development process and never takes input outside of what's in my package.json, gets the same prominence (or more so, since it's early in alphabetical order) as a "critical" bug in Express, potentially allowing my server to be compromised.
I'm not really sure what the solution is here; NPM's just a package manager and doesn't know how you're using a given package. A simple heuristic distinguishing development dependencies and runtime dependencies in NPM Audit might be a start, but that doesn't help with situations like create-react-app's react-scripts where everything, runtime or dev dependency, is a transitive dependency of one package declared as a runtime dependency.
A “Critical” bug in a dev context should mean something very different from a “Critical” bug in a prod context. A “Critical” devDependency bug should be a direct threat to the developer’s context, either by infecting the dev machine or by injecting a supply-chain problem that worms its way into downstream contexts.
npm audit is just not granular OR careful enough to address these issues appropriately.
Would be nice if package.json had a flag to indicate the runtime would be either Node.js or a browser. So many of these "bugs" have no bearing in a browser context.
The package.json should be able to explicitly ignore vulnerability IDs. As IDs disappear from audits, npm audit could just remove those entries, e.g. a "prune".
IMHO one solution would be to categorize vulnerabilities separately for prod dependencies and dev dependencies, and bubble that categorization up.
For example, a regex DoS vulnerability in Express would show up as high severity, while the same would not show up in the bundler you use, or in any package in your bundler's dependency tree.
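A rough sketch of that kind of triage pass, assuming a hypothetical findings shape (real `npm audit --json` output is structured differently, and `triage` is my own invention):

```javascript
// Hypothetical triage pass: findings whose dependency chain is rooted in a
// devDependency never ship to production, so cap their severity.
function triage(findings, pkg) {
  const devRoots = new Set(Object.keys(pkg.devDependencies || {}));
  return findings.map(f => ({
    ...f,
    // f.path is assumed to be the chain from a direct dep to the vulnerable one
    severity: devRoots.has(f.path[0]) ? "dev-only" : f.severity,
  }));
}

const pkg = {
  dependencies: { express: "^4.17.1" },
  devDependencies: { webpack: "^5.0.0" },
};
const findings = [
  { path: ["webpack", "glob-parent"], severity: "high" },
  { path: ["express", "qs"], severity: "high" },
];
console.log(triage(findings, pkg).map(f => f.severity)); // [ 'dev-only', 'high' ]
```

As the create-react-app point above shows, this heuristic breaks down when everything hangs off one runtime dependency, but it would already de-noise the common case.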
Other developers have no idea of context either. Unless you have a way of enforcing that certain code is never exposed to user input (and I agree that a build-time-only dependency does solve that), sooner or later it will be.
Accepting regexes from user input is a really insidious class of bug that can go undetected for years. I've seen real outages caused by it, so it's absolutely worth doing something proactive about.
True story, the npm registry was once taken down (not maliciously, just by accident) by a ReDOS in node-semver. That was extra fun to debug because the failure happened inside of CouchDB.
> A regex "denial of service" "vulnerability" could be important, if it shows up in code that processes untrusted input from end users.
But in this context what's the end result? Chrome locking up on the end user's (attacker's) machine? Again, an "attacker" doesn't have access to the source code for distribution. By inputting bad regex data they're only DoSing themselves, no?
Beautifully succinct. This quote: "Grey-hat hackers are rewarded based on the number and severity of CVEs that they write. This results in a proliferation of CVEs that have minor impact, or no impact at all, but which make exaggerated impact claims." Alignment of incentives is messed up. Goodhart-Strathern's and Campbell's laws apply.
Sounds like academic research publications. Sure, that will totally be a key step toward cancer therapy or better biofuels (realistically, the PI gets his jollies by shoving aldehyde groups onto random molecules)
Oh, you mean like the guys who tried to inject vulnerabilities into the linux kernel and got their entire university on Greg Kroah-Hartman's shit list? https://news.ycombinator.com/item?id=26887670
Maybe then writing and submitting a CVE should cost some money that's paid back, together with the reward, if the vulnerability is found to be "reasonable" upon review?
I'm always suspicious of just throwing money at a problem, particularly in things like open source where money isn't always the motivator and can often be a corrupting influence. In some cases this will reduce the ability of genuinely well-intentioned people to participate, simply because they don't have the money up front, and for well-funded organizations the money would have to be quite a lot.
I'd like to ask: what, other than money, directly motivates people? Is it prestige? A line on their resume? A requirement for a bootcamp class? In addition, we should re-evaluate the difficulty of submitting a CVE. Is it too easy? The story about a mass of "hey, your regex parser could choke on this weird expression"[1] reports suggests that perhaps so. What can we do to make it so that CVEs and equivalents are truly meaningful? Also, just the fact that CVE reports are given a great deal of respect could be the problem, although at this point that seems to be self-correcting.
[1] Some classes of regex parsers are known to be vulnerable by nature, those that do backtracking for example, because their worst-case runtime grows exponentially and can run in unbounded time. This has been known since at least 2009. There are other implementations with better worst-case runtimes, but worse performance in typical cases. The fact that it's trivially easy to look at a regex parser to see if it does backtracking and construct an "evil" expression that breaks it means it's trivially easy to file a DOS report against any such parser.
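The footnote's point is easy to demonstrate: a nested quantifier gives a backtracking engine exponentially many ways to split the input, so each added character roughly doubles the match time. A toy example against Node's default (backtracking) engine:

```javascript
// Classic "evil" pattern: (a+)+ lets the engine try every way of grouping
// the run of a's before concluding that the match fails.
const evil = /^(a+)+$/;

function timeMatch(n) {
  const input = "a".repeat(n) + "!"; // trailing "!" guarantees failure,
                                     // forcing the full backtracking search
  const start = process.hrtime.bigint();
  evil.test(input);
  return Number(process.hrtime.bigint() - start) / 1e6; // milliseconds
}

// The work grows roughly as 2^n, so a 22-character input is already
// dramatically slower than a 10-character one.
console.log(timeMatch(10), timeMatch(22));
```

This is also why the reports are so cheap to mass-produce: spotting a nested quantifier and constructing the failing suffix is mechanical.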
AFAIK MITRE has a process for an organization to register as a vendor, after which it accepts CVEs for their products only from the vendor, not from random people. Of course this has the opposite failure mode - unscrupulous vendors may hide issues or just be lazy about issuing CVEs for existing bugs - but it eliminates the problem of random people issuing a ton of CVEs for non-issue bugs.
I'm pretty sure CVEs and the like came about because vendors were choosing to hide or deny security vulnerabilities. Vulnerability disclosure policies are a whole different kettle of worms.
I had a researcher contact me about a "vuln" in an OSS effort of mine once. The vuln made no sense w/ how the tool was used, but they published and I earned a CVE scarlet letter nonetheless. I finally "fixed" it, but IMHO, nothing was ever broken or vulnerable.
I wouldn't call a CVE a scarlet letter. Given the current state of software engineering, it's more like "my project is valuable enough to be used by someone that cares about security". You fixed it, one less bug to worry about. No doubt there are many less popular products with many worse vulnerabilities that don't have a CVE.
Even OpenBSD had to change their tagline to "Only two remote holes in the default install, in a heck of a long time!" (from "Five years without a remote hole in the default install!") Still a pretty impressive track record.
Those "bugs" can be features, though - or the work involved in fixing the bug means that high-impact feature work, or other bugfixes, had to be postponed or even cancelled.
Our SaaS frequently gets security "researchers" (read: people running online scanners) submitting emails through our contact form informing us about click-jacking attacks on our login page. The problem for us is that we have a lot of second-party and third-party integrations on unbounded origins that offer access to our application, and by extension our login screen, through an <iframe> on their own origin - sometimes even an on-prem LAN web server accessed through embedded devices, where we can't use popups to do it properly, let alone switch to a more robust OIDC system. So there is no easy solution that makes the "I ran a tool, gimme $100" people go away without causing a much bigger problem.
I wouldn't take Greg's opinions on security too seriously.
Spender has a much more nuanced, informed view. I think it covers the issues of the CVE process well, but doesn't make the same mistakes that Greg does.
The more I work with parsing, parser combinators, and writing grammars for little languages, the less often I find myself using or wanting to use any regex at all. When I do, I always feel like there should be a better way - perhaps a type-safe way of accessing the info I need, and so on. It feels like "Ugh, there should be a better way to do this." Especially in JavaScript, regexes are poor in comparison to languages with named matching groups and all that. In JS, regex really feels horrible, even more cryptic than in other languages.
I think regexes are often used as a quick and dirty solution to problems, which should be solved differently. But once the regex "works" and is in place, others begin to rely on that output. Over time cruft begins to accumulate and the regex is forgotten or at least never replaced with anything more appropriate.
> The more I work with parsing, parser combinators and writing grammars for little languages, the less often I find myself using or wanting to use any regex at all.
Surprise: The most common parser combinator libraries do backtracking. That's exactly the problem. Any solution as widely used (if not overused) as regular expressions ends up exposing a number of dark corners where the design isn't as clean and tight as you would want it. There are lots of better ways, but most of them are specialized and are totally unsuited for significant areas where people need something.
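A minimal sketch of why combinators backtrack (toy code, not any particular library): the `alt` combinator below retries each alternative from the same starting position, and nesting such alternatives is exactly where the blowups hide.

```javascript
// Tiny parser combinators over a string; a parser is (input, pos) -> result.
const char = c => (input, pos) =>
  input[pos] === c ? { ok: true, pos: pos + 1 } : { ok: false, pos };

// seq runs parsers in order, threading the position forward.
const seq = (...ps) => (input, pos) => {
  for (const p of ps) {
    const r = p(input, pos);
    if (!r.ok) return { ok: false, pos };
    pos = r.pos;
  }
  return { ok: true, pos };
};

// alt is where backtracking lives: each failed alternative rewinds to the
// original pos and the next one is tried from scratch.
const alt = (...ps) => (input, pos) => {
  for (const p of ps) {
    const r = p(input, pos);
    if (r.ok) return r;
  }
  return { ok: false, pos };
};

const ab = alt(seq(char("a"), char("b")), seq(char("a"), char("c")));
console.log(ab("ac", 0).ok); // true: "ab" fails, rewinds, "ac" succeeds
```

Stack enough nested `alt`s over overlapping prefixes and you get the same exponential search space as an "evil" regex.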
That said: yes I've used LR(1) parsing (not LALR) using a library that uses parser combinators with a good interface, and it's more powerful than regex and worth it for the right usecase.
We got bit by this last week; our scans were suddenly all red, and nobody could deploy to production. We had to write an analysis of why this wasn't actually dangerous to us in order to get security to suppress the findings.
Having 20 supposedly high-risk issues about possible DoS when they all come from a build-time dev dependency is just totally useless. If only I could add whitelists, it would be bearable - something like "I don't care about this kind of issue in a dev dependency".
Isn't this an area where gamification and machine learning could actually be useful, if applied carefully?
If people are competing for CVEs, then why not work out a way to better differentiate them through scoring, and make this visible? The goal would be for attention to shift to the scoring instead of only a CVE count. Offer both views of the world, so tools could still fall back on the problematic listings they get today.
Apply machine learning to classify CVEs based on the reputation of the reporter, blast radius, or other criteria. Use that to drive community review and scoring.
I would not see this as a panacea, because it brings a lot of challenges (a la StackOverflow), but it would be much better than what we have today.
We're kind of already doing scoring in that CVEs are usually graded on severity, but researchers are motivated to inflate the severity of CVEs they find. So the question you'd need to tackle is how does one apply a universal standard to measure the real impact of a CVE?
I suspect it's an impossible challenge, but I only dip into this domain casually so maybe someone has better ideas.
I'm not making the claim that it's a universal standard, but there are likely indications that some researchers are of a different pedigree than others. A researcher repeatedly reporting the same kind of low-grade vulnerability probably shouldn't carry the same reputation score as other researchers.
I don't think there is a perfect way to do this, and I don't think there is an absolute standard that can be applied. It will be unfair to some people, but the system should have options for resolution when there are egregious mistakes. I'm also not claiming that the views of the data you are interested in are the ones I might be interested in. A good system would provide several different levels, which is itself an incentive toward better research that would break through.
I'm more inclined to think the better solution would be to stop issuing CVEs for trivial "exploits" like Regex DOS, unless there's actual demonstrated uses of the exploit.
Neither is designed for a CAD engine. CAD generally uses NURBS and a scene graph built around constraints. Game engines are not constraint-based, but rather about independent behavioral agents interacting using triangle meshes. Quite different.
For NURBS modelling there is OpenNURBS [1], made by the guys who made Rhinoceros 3D, an excellent NURBS 3D modelling program. However, although Rhino is available on Windows and macOS, it is not available on Linux and the authors seem pretty against making a Linux version. OpenNURBS is only available on Windows.
I desperately want open source CAD to be a thing. I would suppose that Freecad is probably a very good starting point. I don't think the lack of good open source CAD is due to a lack of a decent codebase so much as the fact that good CAD software is a very large project and we have just not directed enough efforts towards it. I am speculating however and may be wrong.
Rather than "just" Open source CAD I would like to see Open Source CAD-like applications. Well, at least in my mind CAD is something kind of specific and carries a lot of design assumptions with it.
I feel there is a lot of cool stuff you can build in the space of building shapes using constraints. Like visual programming languages made to visualize mathematical relationships using geometry.
Interesting. In my case I am specifically talking about mechanical design CAD for engineering. We have a huge need for that in my opinion. Freecad might be the closest thing but I suspect it needs more investment.
However what you’re talking about sounds important as well.
> I don't think the lack of good open source CAD is due to a lack of a decent codebase
I think the problem is exactly that. A good constructive solid geometry (CSG) kernel is a difficult thing to make, and there are no open source alternatives to the ones powering the mainstream CAD packages.
Closest thing would be a level editor, which is very much a CSG-based CAD program with roots going back to the Quake 1 generation. It's just not geared towards typical production applications where real objects are being machined.
It's hard to avoid the segfaults. I've tried to use it many times for simple projects. In some workflows it's every 10 minutes or so, and you often get into corrupted states where you can tell it's about to happen - but of course, if you save, you risk corrupting your files completely.
It's also slow; Onshape in my browser beats it at roughly everything.
Article says: "Over the last couple of years, the Israeli Defense Forces (IDF) have been using AI and supercomputers to identify locations of Hamas activity and plan strikes to remove any strategic advantage."
Does this work well? Or does Israel not care about civilian deaths in Gaza? Here is the NYTimes article on who Israel has been killing in Gaza:
Unfortunately it fights wars in order to maintain its apartheid-like domination over “others.” This tends to invalidate any possible moral stance it may have or pretend to have.
MSPs are so often the security vulnerability themselves these days, rather than being a security benefit. This isn't the first time this has happened and won't be the last.
How many people are affected this time?
The SolarWinds Orion exploit was the basis of the US government hack. Kaseya here is ransomware. Is ConnectWise next?
If this happened to Trump it would be on all of the right wing news channels for a full news cycle and used to discredit all investigations/charges against him.
Different people have different levels of access to ensuring media coverage, in part because of biases in those media outlets.
This is just the larger question of AI personhood. If/when we get more advanced AI, especially if it shows some type of self-awareness, even if programmed in, this will likely be a rights issue/movement similar to what we've seen in the past for previously marginalized groups.
> How long will it be between the introduction of lab-grown meat until killing animals for food is banned? Is this a generational thing (so older people still consider eating animals to be acceptable, while younger people do not), or is it a country-wide thing (countries move to ban eating animals one by one, driven by a general shift in moral attitude), or what? How does this change in morality propagate through society?
Just because they ate meat? Maybe not, but they may not look kindly on those that did large-scale killing with less-than-humane practices. I could see these individuals being more likely to be "Cancelled."
Maybe those that fought vegans or fought against animal rights activists?
Those who opposed change for the better, or who were active participants in things we in the future see as bad.
Remember that most people cancelled were active participants, rather than passive bystanders going with the flow.
True. We tend to cancel slave traders rather than slave owners, because if we were to cancel the slave owners then we'd have to cancel a lot of people.
I should clarify. I'm not criticising cancel culture. I'm interested in how the shift in morality happens and how it propagates, and what that means for us now. Is morality retroactive? Am I an immoral person because I eat meat, even though it's common practice now?
I wouldn't be surprised if this nuance were lost in the hypothetical future cancel culture, but I think anyone giving careful consideration to present circumstances would realize the terrible choice we all have to make.
It's easy to judge someone for complicity in the mass murder of countless animals when you have the luxury of having been saved from that choice by more advanced technology.
>I wouldn't be surprised if this nuance were lost in the hypothetical future cancel culture, but I think anyone giving careful consideration to present circumstances would realize the terrible choice we all have to make.
It already is lost among many people today, so that's not a stretch in the slightest. There are an egregious number of people that use the morality of the present to judge the decisions of the past, when the whole system and culture was different - all simply because these people were spoon-fed and told "this is bad because of 'x'" without explaining why people thought the way they did at the time. I mean, the simple fact that there is a concept of "leftism" is enough to state that there is an ideology someone can measure you against on how "woke" you may or may not be. It's no different from the Catholic church setting up a mock trial to determine if you are a heretic - except it's not a centralized theology. But clearly it has enough adherents that some aristocrats are willing to kowtow to the belief system.