Yes, it is used, but of course it is not all-inclusive enough to handle every hardware type out there in the world. That takes actual real-world usage, although kernelci is starting to pick up the slack in that area.
For the 99% of people reading who don't know the kernel as well as you (seriously) can you explain why this is the case?
Does a userspace filesystem driver lose a lot of performance to context switching, or is there something unique to filesystems that slows them down in user space?
I use a FUSE filesystem on linux- not ExFAT, but as an S3 interface. I often see 1+Gigabyte/sec throughput over the wire from simple multithreaded IO operations. FUSE can be very fast.
That's correct, my metric for performance is throughput
I don't really think desktop vs. smartphone matters much for this comparison. My observation is that when people say a FUSE filesystem is slow, it's usually just not well-engineered, not that the FUSE API is inherently slow.
It matters a lot. You can squeeze impressive performance from a smartphone, too, but then you need to dump all that waste heat somewhere, plus you need to take battery capacity into account. In other words, in a desktop, power efficiency is commendable, but not critical.
For copying photos off an SD card you're unlikely to notice any difference on any machine made in the past 10 years, except maybe the fans coming on, but try it on a Raspberry Pi class machine, or doing any kind of random IO and oh boy..
The Spectre/Meltdown situation must have made FUSE much, much worse than it was already. I wonder if anyone did any benchmarks for that
Much faster. Also, FUSE has to be installed by the user, whereas if the driver is in the kernel, it Just Works everywhere. With the spec open and the patent license freely granted, the most common USB flash drive format can now Just Work everywhere.
It also removes the requirement of distros packaging it up and users installing and configuring it. If it's in the kernel, it just works out of the box everywhere.
This is one of the first times that I know of that the Linux kernel and Windows kernel developers discussed a security issue together directly. So while the fix was much simpler than it was for Meltdown/Spectre (Linux was fixed with a patch that was written in 2015), overall the communication between different OS kernel developers right now is very good.
And yes, it is all due to the horrible Meltdown/Spectre problem and how that was handled. We were not allowed to work together for that problem, and we do not want that to happen again.
Linux has a security team/list that does handle disclosures and fixing of reported problems. It usually does this by just dragging in the responsible parties, but many times it just fixes the problems themselves and submits the patches to the proper part of the kernel.
>It usually does this by just dragging in the responsible parties, but many times it just fixes the problems themselves and submits the patches to the proper part of the kernel.
I think this is the key - once the patches are submitted they go through the normal workflow and aren't any different from a typical patch, right?
Yes, once the patches are generated, they get sent to the responsible subsystem and maintainer and submitted for inclusion just like any other kernel change.
I think this is where the problems becomes apparent. The change then has to go through the subsystem's patch flow (in public) then into the merge window (in public) and sit in the release pipeline (in public) until the window closes and all of the rc's are through.
Personally I don't think this is a huge deal, but it's where the security person's ideal worldview and the reality of how Linux is built collide.
Distros could speed this up. If the security team notifies Red Hat, Ubuntu, etc., they can apply the patch to their kernels immediately. They probably need to backport it anyway, because no distro (well, maybe Arch?) just uses the latest Linux release.
Breaking embargoes generally leads to easy, packaged exploits appearing that many days earlier than they otherwise would have. Obviously patches being delayed gets you on the other end. How are either of those not a huge deal? Do you just think it's not important to have patched systems against published vulnerabilities?
GitHub doesn't scale at all for large projects. The kernel is averaging over 8 changes an hour, 24 hours a day. That rate of change can never be handled by doing pull requests and web-site review. It can only work with email, review, and scriptable processes.
That's roughly one every other hour, so still not quite on the same level. And we do use tooling _in addition_ to GitHub, but reviews are done in PR and not over email.
(FWIW I personally wish more projects worked like the kernel's, but such is life)
Eh, I mildly disagree on web site review. Perhaps not GitHub's pull request/review model, but big projects like Android and Chromium do all their review on web interfaces.
Using Gerrit, which is an automated mimic of Linus' development model.
By the way, we've been successfully using Gerrit on smaller-scale projects as well (around 8 change-requests / day) and don't even want to think about going back to pull requests.
What you've said is essentially correct, but I'd like to make a minor correction: Chromium uses rietveld whereas Android (and some Chromium related projects) use gerrit. Rietveld and gerrit have similar workflows except that gerrit is more integrated with git (I think). Both are named after Gerrit Rietveld, a famous Dutch designer.
Just because one big project relies on email does not mean email scales better. It works for the kernel because, with that strong organization, pretty much anything could work for the kernel.
Criticisms in the interview were essentially about the low discipline that the GitHub interface allows, and about GitHub not being willing to adapt to the kernel's needs, not about fundamental deficiencies of the pull-request-based model.
So far nothing has been said about why pull requests via a website might handle less load than pull requests over email.
Greg didn't reply but I think the answer is a resounding yes. I wrote this up a bit back when the devs were using BitKeeper. Lemme go find that. OK, ignore the "I want the 1995 era web pages" look and peek at this:
It must be getting close to expiry at this point -- wasn't it filed in the mid 90s?
EDIT:
I did a little digging and some calculations in the USPTO's horrible, horrible expiry calculator and the list of patents in wikipedia (which TBH, I haven't dug into to figure out how relevant they are but the list below is roughly ordered from "fundamental" to "interesting things you can do with RCU"):
If the original RCU patents are expired, it might make more sense for the BSDs to start using RCU; it makes things much easier in the long run, and should scale a lot better than this proposed solution.
Greg, if you know anybody that has some pull on these kinds of things within IBM, can you contact The FreeBSD Foundation? We might need some kind of "FreeBSD and derivative" exemption, or an interpretation of the unexpired patents' effects on an RCU implementation. Justin Gibbs said that the last time he tried to contact IBM he got directed immediately to legal staff, and he does not have a good contact within IBM.
Try contacting the Linux kernel RCU maintainer, he should be able to look into if IBM is willing to do this or not.
Odds are they are not; it was a very specific decision to make this a GPLv2-only patent grant. But as I haven't worked for IBM for almost a decade, I can't speak for them in any form (not that I ever could...)
> If the original RCU patents are expired, it might make more sense for the BSDs to start to use RCU, it makes things much easier in the long run, and should scale a lot better than this proposed solution.
Optimizations and new versions of RCU are patented as well: preemptible variations, scalability optimizations, tiny/embedded variations, and many common data structures.
However, I agree that even the most basic RCU implementation will scale far better than referencing counting for many applications.
The Patent Office considers everything patentable. Last month they issued a patent on literally calling up two pharmacies to pick the one that offered your drug cheaper. They even fast tracked that application and didn't ask a single follow up question about it.
Urgh, no, they don't. If you're ignorant about an issue, you should ask questions, not speak authoritatively about it. And yes, you are clearly ignorant about the issue. I'd guess you would also believe "Amazon just got a patent for taking a photo of something in front of a white background", right?
I believe that should be called out when they're outright lying, as the commenter I was replying to clearly was (though probably doesn't know it). I've tried being more "constructive" in the past, and it never does any good; people like remaining ignorant so they can feel indignant.
If you mean this [1], then they convinced the USPTO that their process for taking pictures with a white background and making it look good (F-stops, ISO settings, etc.) was patentable. And of course, the patent says "It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure."
Because you could, of course, use other F-stops, ISO settings, etc. That would be really obvious. The whole "set it up with good settings a normal photographer might use" part? That's supposedly the original bit.
This reminds me of Apple's design patent on a design where "rounded corners" were basically the only feature anyone could name that was similar to Samsung's. So technically they didn't patent rounded corners... they just accused Samsung in court of infringing it with a phone where that was the primary similarity.
This is all part of a big game where patent lawyers find meaningless "limitations" like this to distinguish themselves from the prior art to the USPTO, then come back to the federal courts and demonstrate just how meaningless they are as they assert what might as well be as bad a patent as is the popular understanding.
I don't think I could agree with you less. The Amazon patent claims are EXTREMELY narrow and specific; any deviation from what's claimed (changing F-stop, ISO, number of lights, etc.) is not within the scope of the claim and is therefore not patented and free to be used. You emphasised that making such changes would be obvious, and yes, I agree, but that's irrelevant unless someone tries to patent a similar process with some of those changes made.
"It should be emphasized that the above-described embodiments of the present disclosure
are merely possible examples of implementations set forth for a clear understanding of the principles
of the disclosure."
Please read the above link about how to read a patent. That passage has absolutely no bearing on the scope the patent covers. It's just there to cover the lawyer's arse when the examiner tries to say "Your claims don't match your examples, so they're not [reasonably based on the description]#". Examples are necessary in a patent application, but should not be used to limit the scope of the actual invention defined by the claims, as long as all the information needed to produce all embodiments of the invention can be reasonably ascertained from the description.
Again, why on earth bring up the Apple DESIGN patent? They are practically unrelated topics. It's like saying a Nissan GT-R is a shitty car because you don't like the Tiida (hmm, does that geolocate me too much?). Just because they both have "patent" in the name does not mean that one is related to the other.
# I'm not sure what the US term here is, in most of the world it's "fully supported by the description"
You may be right and can argue all you want. It is utterly meaningless in the real world sadly. What you are saying is how the patent system should work, but cannot. You are leaving out the human factors that ruin it.
All it takes is someone to sue you for use of your novel use of an existing idea and you are dead in the water despite it being valid. Most small companies let alone open source communities have no legal resource to defend themselves even if they are demonstrably in the right.
This is all precisely why it shouldn't be possible to patent minor abstract symbolic expressions of basic human problem solving. It's like suing a child who is learning how to walk because you hold a patent on walking. Simultaneous invention is normal and healthy.
You're right that they technically patented some narrow thing, and that they're trying to avoid limiting the scope of their patent. I am well aware of the fact that a design patent is a separate class of thing, I brought it up because it shows how lawyers being too good at avoiding limitations is a large part of the unreasonableness. When you get right down to it, you can see where they end up demanding hundreds of millions over ridiculous trivialities because triviality is too subjective and they'd rather have a bright line in a stupid place.
Having watched the motion practice for how these actually play out in court, however, I know how they can dance around all the limitations to the point where "they didn't actually patent taking photos on a white background" becomes academic.
At least this is how it plays out in the US. It's possible that other countries have a saner patent system and I would rather like to believe that, but I do not claim much familiarity with how patents are used outside the US.
The patent system in the US is definitely very sick, and much more open to abuse than elsewhere in my experience. There should be much greater penalties for entities who sue and lose, and the first action in any patent case should be a deep assessment of the patent's validity (in the US, the doctrine is that the patent office got it right and the patent is valid). If it were a matter of contract law, the contract would have to be found valid before continuing with any prosecution, but not so with the social contract provided by a patent.
Someone being sued has a much greater impetus to find relevant prior art than the patent office (not that the office doesn't want to find it; they just don't have time to go to the lengths that might sometimes be required). There also seems to be a lot of common sense missing in the US system (and US law in general).