My guess is that this is a great way for them to stand out, fill a niche, and get tons of free advertising in order to gain back some of their Android market share.
Motorola has effectively lost in the Android market and is on a downward spiral into irrelevance (already there?), so they have to do something different.
Add to that that existing GrapheneOS users at best only care about good-enough performance and a good camera; the selling feature is security, so there's a lot less overhead in marketing such a phone. Those who want the latest features will continue to buy Pixels, Samsungs, and iPhones. The only thing I feel is missing from the picture, at a quick glance, is a tablet for the few who want a secure tablet device.
An open standard with attestation built in, which allows sites to block all open implementations. FIDO Alliance spec writers have even threatened that apps like KeePassXC could be blocked in the future because they allow you to export your keys.
The export is end-to-end encrypted, so you do not have ownership of the data, and the provider (Apple in this case) has full control over who you are allowed to export your keys to. (Notice how there is no option to move your keys to a self-hosted service.)
What OS are you using these settings on? I found that Windows will refuse to allocate more virtual memory once the commit charge hits the commit limit, even if there is plenty of physical memory left to use.
I have 64 GiB of RAM, and in some workloads programs would start to crash at only 25 GiB of physical memory usage because of the high commit charge. I had to re-enable a 64 GiB page file just to be able to actually use my RAM.
My understanding is that Linux will not fail the allocation, and instead things only start getting killed when too much virtual memory actually becomes resident. Not sure how macOS handles it.
Linux. I also don't typically run any individual workload that would consume all system RAM (not loading a giant scientific model into memory all at once for fast processing or anything like that), so mileage will certainly vary based on requirements.
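The Linux behavior described above (overcommit: allocations succeed cheaply, and physical pages only get charged when touched) can be sketched with a minimal demo. This isn't tied to any workload from the thread, just an illustration: an anonymous `mmap` of 1 GiB succeeds on a default-configured Linux box and costs almost no resident memory until its pages are faulted in, whereas on Windows an equivalent `VirtualAlloc` with `MEM_COMMIT` would add the full 1 GiB to the commit charge up front.

```python
import mmap
import resource  # Unix-only; matches the Linux case discussed

SIZE = 1 << 30  # reserve 1 GiB of virtual address space

# Anonymous mapping: on Linux this reserves address space, but pages are
# not backed by physical memory until they are actually touched.
buf = mmap.mmap(-1, SIZE)

buf[0] = 1  # fault in a single page; resident usage stays tiny

# ru_maxrss is reported in KiB on Linux
rss_bytes = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024
print(f"mapped {SIZE} bytes, peak resident ~{rss_bytes} bytes")
buf.close()
```

Run this while watching `htop` and you'll see virtual size jump by 1 GiB while RES barely moves; the commit-charge model on Windows refuses at reservation time instead.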
There's a massive difference between having a country spying on its own citizens versus having an adversarial country doing it. The three-letter agencies would likely not be trying to sabotage or destroy their own country's economy and global standing, for one.
It's concerning that someone from the EU is still asking this question. How is there any doubt left in you? Yes, of course both are adversarial countries, and shouldn't be treated all too differently. In the short-term, the US is the bigger threat, as they've shown they're much more willing to use the power they have to cut off access than China.
As someone from the US I would suggest viewing both as adversarial. I don't really trust my own government, but if I was born abroad I would trust them even less.
You absolutely can. We see a huge uproar in European enterprises against US software/vendors/etc. Many companies are halting their cloud migrations because they are now worried that the current US government could decide to just pull the plug or do something similarly drastic.
Wouldn't having an adversarial country spying on you be the better option for you personally? At least privacy-wise: it's not using your machine as some infiltration point, and the country you reside in has many more opportunities to abuse the data.
Of course, but when you add enough noise you lose the signal, and as a consequence no PRs get merged anymore because it's too much effort to find the ones you care about.
Don't allow PRs from people who aren't known contributors, problem solved. Closing your doors to the public is exactly how people solved the "dark forest" problem of social media, and OSS was already undergoing that transition with humans authoring garbage PRs for reasons other than genuine enthusiasm. AI will only get us to the destination faster.
I don't think anything of value will be lost by choosing to not interact with the unfettered masses whom millions of AI bots now count among their number.
That would be a huge loss IMO. Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, then you're basically halfway to the bad old days of closed source software reigning supreme.
Then there's the security concerns that this change would introduce. Forking a codebase is easy, but so are supply chain attacks, especially when some projects are being entirely iterated on and maintained by Claude now.
> Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, then you're basically halfway to the bad old days of closed source software reigning supreme.
Exaggeration. Is SQLite halfway to closed source software?
Open source is about open source. Free software is about the freedom to do things with code. Neither is about taking contributions from everyone.
For every cathedral (like SQLite) there are 100s of bazaars (like Firefox, Chrome, hundreds of core libraries) that depend on external (and especially first-time) contributors to survive (because not everyone is getting paid to sling open-source).
Is there a reason that you chose SQLite for your counterpoint? My hot take: I would say that SQLite is halfway to closed source software. Why? The unit tests are not open source; you need to pay to see them. As a result, it would be insanely hard to fork SQLite in a sustainable, safe manner. Please don't read this opinion as disliking SQLite's software or commercial strategy. In hindsight, it looks like real genius as a way to resist substantial forks. One of the biggest "fork threats" to SQLite is the advent of LLMs that can (1) convert C code to a different language, like Rust, and (2) write unit tests. Still, a unit test suite for a database will likely contain thousands (or millions) of edge-case SQL queries. These are still probably impossible to recreate, considering the 25-year history of bug fixing done by the SQLite team.
And how does one become a maintainer, if there's no way to contribute from outside? Even if there's some extensive "application process", what is the motivation for a relatively new user to go through that, and how do they prove themselves worthy without something very much like a PR process? Are we going to just replace PRs with a maze of countless project forks, and you think that will somehow be better, for either users or developers?
If I wanted to put up with software where every time I encounter a bug, I either have no way at all to report it, or perhaps a "reporting" channel but little likelihood of convincing the developers that this thing that matters to me is worthy of attention among all of their competing priorities, then I might as well just use Microsoft products. And frankly, I'd rather run my genitals through an electric cheese grater.
You get in contact with the current maintainers and talk to them. Real human communication is the only shibboleth that will survive the AI winter. Those soft skills muscles are about to get a workout. Tell them about what you use the software for and what kinds of improvements you want to make and how involved you'd like your role to be. Then you'll either be invited to open PRs as a well-known contributor or become a candidate for maintainership.
GitHub issues/PRs are effectively a public forum for a software project where the maintainers play moderator, and that forum is now overrun with trolls and bots filling it with spam. Closing up that means of contributing is going to be the rational response for a lot of projects. Even more will be shunted to semi-private communities like Discord/Matrix/IRC/email lists.
Sadly VeraCrypt is not optimized for SSDs and has a massive performance impact compared to BitLocker for full disk encryption, because with VeraCrypt the SSD doesn't know which blocks are used or free.
VeraCrypt can be set to pass through TRIM. It just makes it really obvious which sectors are unused within your encrypted partition (they read back as 00 bytes).
Oh, I did not know of this option, thanks! However, I was wrong about the reason for the performance loss on high-speed SSDs; the issue is actually related to how VeraCrypt handles IRPs: https://github.com/veracrypt/VeraCrypt/issues/136#issuecomme...
Forgive me this shameless ad :) With the latest performance updates, Shufflecake ( https://shufflecake.net/ ) is blazing fast (so much so, in fact, that it exceeds the performance of LUKS/dm-crypt/VeraCrypt in many scenarios, including SSD use).
The performance loss can be substantial on modern NVMe drives, up to 20 times slower. But I was wrong about the reason for the performance loss: it's not TRIM but how VeraCrypt handles I/O operations. You can see some real numbers in this GitHub issue: https://github.com/veracrypt/VeraCrypt/issues/136
I've tried MacType before, but sadly it came with significant slowdowns in many applications: lists would lag while scrolling, etc.
It's really annoying because all I really want is to disable ClearType on my primary high DPI monitor while keeping it with default settings for my two side monitors, but Windows does not let you configure it per monitor.
My impression was that you would apply this filter after the logs have reached your log destination, so there should be no difference for your services unless you host your own log infra, in which case there might be issues on that side. At least that's how we do it with Datadog, because ingestion is cheap but indexing and storing logs long term is the expensive part.
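For the Datadog case, "filtering at the destination" is done with index exclusion filters. A rough sketch of the shape such a filter takes when managed via the Logs Indexes API is below; the field names and the example query are from memory and should be treated as assumptions to check against the current API docs, not a definitive config.

```json
{
  "exclusion_filters": [
    {
      "name": "drop-debug-logs",
      "is_enabled": true,
      "filter": {
        "query": "status:debug",
        "sample_rate": 1.0
      }
    }
  ]
}
```

The point matching the comment above: logs matching the filter are still ingested and flow through normally, they just never get indexed and stored, which is where the cost is.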