Can confirm, I've reported a similar attack [1], along with a few other vulnerabilities, and also published exploit tools. I ended up getting legal threats from two people that I see frequently posting to sks-devel@ mailing list.
Additionally, Robert (GnuPG maintainer who wrote this Gist) has attacked [2] another person who wrote a proof-of-concept filesystem on top of SKS that was intended to highlight how broken the design is.
I have not seen a single open source community that would treat full disclosure with such contempt.
At this point the SKS network continues to run exclusively on community goodwill. This attack seems to be specifically targeted at GnuPG maintainers; if the attacker were deliberately trying to break SKS, they would target someone like Linus Torvalds.
Additionally, there are other published vulnerabilities with exploits that make it possible to take the whole SKS network down within half an hour, and those were published more than a year ago. And yet, they have not been used, so far.
I should have said "any disclosure": EFail was coordinated (six months' notice [1]) and yet GnuPG officially downplayed the risk [2], launched an #effail counter-campaign, and blamed the researchers for bad disclosure [3].
With regards to any of the existing SKS exploits specifically: even if any of them were to undergo coordinated disclosure, it wouldn't have helped: trollwot has been available for 5 years, both keyserver-fs and sks-exploit -- for more than a year. Embargoes don't last that long. All three tools still work.
What the GnuPG Project effectively tries to do is stop people from writing about any security problems, period -- especially those that are hard to fix.
OK, makes sense. And damn, 10 years is >>> a year.
So then, as a mere user, I gotta ask how so much of the Linux ecosystem -- and indeed, so much of the open-source ecosystem -- came to depend on such a fragile thing as the SKS keyserver network. That's kinda mind-blowing.
There are not that many phone manufacturers that even allow you to change the trust anchor (which is what makes any of this even remotely possible). For example, Samsung uses e-fuses to burn in their signing key, and rewriting the recovery will permanently trip their attestation (Knox); other manufacturers use similar practices. Pixels are among the only currently available phones designed with user-controlled trusted boot in mind.
Yea. It's bullshit. You can't install something like Magisk and then relock the bootloader with a new signature.
A PC with UEFI (except for the few that Microsoft locked down) lets you turn off Secure Boot, install your own keys, and turn it back on. So you can actively delete the stock keys that boot stock Microsoft/Ubuntu/Red Hat, custom-sign your GRUB bootloader or UEFI-stub kernel, add that cert to Secure Boot, and turn it back on.
You can argue device security all day long, but if manufacturers can't ship Android security patch sets as they come out, then you have gaps in your device security anyway.
Google controls AOSP. They could literally force manufacturers to be compliant, make UEFI or devicetree a standard, demand every device allow a stock reinstall just like Windows, and even create shims to fix the broken Linux driver ABI. But there is more money in planned obsolescence. Gotta throw out that phone after two years and just buy a new one.
I'm currently using Galaxy S9 and it is the first phone I decided not to root. I've always rooted my phones since I was introduced to Android, but this time Samsung tied its crucial functionalities with Knox. And Samsung Pay was just too important to me. Too bad more manufacturers are doing this.
There are more devices supporting this than there used to be though. https://grapheneos.org/#device-support explains that it's going to support other devices. It doesn't support the Pixel 3a and Pixel 3a XL yet either. Supporting each device is a lot of work, and other devices will need to be carefully chosen. It would be harmful to make bad choices about device support and encourage people to buy insecure devices with too many issues that can't be fixed with another OS.
The Librem 5 isn't going to be particularly security-focused: no attestation, no trusted boot, and most userspace programs written in memory-unsafe languages like C, with no extra memory-corruption mitigations. Also, Flatpak offers a permission system that's very limited compared to Android's.
There is security, and then there is freedom. You can have the most secure system in the world -- but if there are state-sponsored or company backdoors, it means nothing.
FOSS initiatives spent ages building free and open software, combating proprietary systems and software that they had no control over.
All of that would be lost if we gave it up now that we have moved from PCs to phones....
I for one want control over all the software I run on hardware I own. I am not sure why we are so willing to give that control up simply because the platform changed.
> There is security, and then there is freedom. You can have the most secure system in the world -- but if there are state-sponsored or company backdoors, it means nothing.
Okay, so you're saying: "If a backdoor is present, then your security prioritization doesn't matter; the result is bad." I understand, but:
1. If there is a backdoor in open source code that goes unnoticed (and it certainly happens) because of persistent but bad practices in the open source community (e.g., a stubborn refusal to stop using C-like memory management semantics and primitives when dealing with untrusted inputs), then why don't said accidental backdoors invalidate the open source work?
2. Does "control" actually matter in the context of AOSP? Strictly speaking, you have essentially everything you need until you hit the hardware drivers. You can easily rewrite that to your heart's content.
3. Given Librem's recent move into commodity-based social products (and the poop-from-great-height attitude they initially adopted), are you genuinely sure that they're actually trustworthy actors? If they're coerced, how will you verify that they never injected a deeply subtle backdoor into the millions of lines of code that you'd like to be unique, and which are therefore less scrutinized?
I can't really work out why you feel the way you do, so I ask these questions.
> persistent but bad practices in the open source community (e.g., a stubborn refusal to stop using C-like memory management semantics and primitives when dealing with untrusted inputs)
This applies to the entire industry; it's not something specific to the open source community. It's also extreme to call the use of C "bad practice," as any language has its own strengths and weaknesses.
Not the entire industry, as many companies have thankfully moved on from plain old C, or at the very least reduced its use quite considerably.
BSD/Linux derived FOSS is still the C stronghold.
The Morris worm was in 1988, since then C has collected enough CVEs due to memory corruption issues to consider its use bad practice.
Something that even Apple, Google, and Microsoft security reports now advise against, with Google actively engaged in taming C's usage in the Linux kernel.
The operating system is only a tiny fraction of the commercial code out there, most of which is written in (more) memory-safe languages like Java, C#, or C++. SAP's code base alone is 1 billion lines of mostly C++ and their own proprietary scripting language.
Not to the extent that it is tainted by C's copy-paste compatibility.
Still, it does provide a stronger type system, proper strings, vectors, reference parameters, and strongly typed enumerations, which prevent a large class of C security exploits.
C++ teams that care about security do use such features and respective static analysers on their CI/CD to enforce them.
While it doesn't cover everything, it is much safer than plain C.
Ideally, we will reach a state where both C and C++ get nuked, or ISO C++ just drops its C copy-paste compatibility -- at which point it is arguably easier to switch to something else anyway.
However, that process will take decades, and it is hampered by our reliance on POSIX-based systems.
Desktops are dwarfed by mobile devices. AFAICT a Linux kernel variant is present on most of the world's smartphones (with most of the rest being iOS devices, which I know little about), though you've addressed that by saying Google is pushing to reduce the impact of the C underlying their system.
I don't want to make a song and dance about C being awesome or anything -- we've certainly got massive issues with allowing that extreme amount of flexibility without ensuring that the developer really, really means what they've just told the machine to do -- but it's hardly a small enclave that's holding out; it's still huge.
And there are still companies developing in it. I've seen a sort-of-microservices-in-C-implemented-as-a-sort-of-supersized-cgi-bin approach relatively recently.
Windows Phone, JavaScript, .NET (VB and C#) and C++.
iOS, JavaScript, Objective-C, C++ and Swift, C only due to BSD stuff.
Android, Java, Kotlin, JavaScript, C++, C only due to the Linux kernel. Project Treble drivers use Java and C++. Its successor, Fuchsia, is written in a mixture of Rust, Dart, and C++.
ChromeOS, JavaScript, C++, Rust, C only due to Linux kernel
> Windows Phone, JavaScript, .NET (VB and C#) and C++.
An irrelevance given their complete lack of market presence.
The rest all have the significant underlying C components you've identified. All I'm saying is that's hardly a 'niche holdout' when it appears to be at the heart of the vast majority of shipping devices.
Why do you assume that OSS has more bugs than proprietary software? I would probably argue the opposite.
With OSS you get more people working on a project that actually care. A proprietary business project prioritizes making money over actually creating a good product everyone loves.
You're right that this is not a perfect solution. All software has bugs and all software may have malicious back doors. I just find it much easier to trust development that happens in the open with community involvement than development that happens in secret, where I have absolutely no way to see what's going on.
If you had an inkling that someone was trying to poison you, would you rather eat the food you watched be prepared or the food that was prepared in secret? Both dishes might be poisoned, but it's reasonable to prefer the one you were able to examine.
> Why do you assume that OSS has more bugs than proprietary software? I would probably argue the opposite
I don't. But nor do I assume it has less. My point, as restated elsewhere, is that from a user's point of view Openness of Source is more about protecting against negligence.
Who exactly is talking about anti-openness here? We're talking about which open source piece of code to reuse. Someone gave a bad argument against one company's offering.
Microsoft of the 90s, which no one emulates these days and it's a wrongheaded comparison anyways, would have said that all the open options are bad to begin with.
If you meant to say "anti-free-software" then maybe we could have a conversation, but that's hardly the problem Microsoft faced in the 90s and 2000s.
Seriously, what does your post mean? Could you maybe be specific? And while we're at it, what's your connection if any with the company that sells Purism phones.
“Open source is not safer because people won’t read the source”, “having control doesn’t matter”, and trying to raise doubts about the trustworthiness of the people involved... that’s old Microsoft textbook approach.
At least MS wasn’t built on open software, unlike Google.
> And while we're at it, what's your connection if any with the company that sells Purism phones.
None at all. I just heard of this project a few days ago via a DDG search.
Believe it or not, not everyone is a corporate shill.
> Open source is not safer because people won’t read the source
That's not what I said. To sum it up: Open source is not really a security proposition. It eliminates problems related to negligence.
> having control doesn’t matter
In what concrete way does the Purism OS give you more control over your device than AOSP?
It really seems like you are confusing open source and free software for this entire conversation, as literally every line of code we are discussing is shared under a license that allows you to look at, modify and use as you see fit.
> None at all. I’ve just heard of this project a few days ago via a DDG search.
The depth of your consideration was already fairly easy to guess, but thanks for being honest.
> Believe it or not, not everyone is a corporate shill.
Bad software is bad whether it's open or not. But historically, closed software has more lock-in. If a particular open lib or component is bad, it can often be fixed by somebody who didn't create it. Or, for those who don't want to touch the scary hairball, it can often be replaced by a completely new hairball written from scratch by a completely different party. Even if there's nothing broken with the original, open software is friendlier to alternatives. It might take a bit of work, but you can replace one open part with another just because it's shinier or smaller or faster or not Oracle or whatever.
I don't trust all open source software, but I trust it by default more than I trust closed software. And I know that if something really bad gets exposed the odds of a solid fix are better in open source. I get to see the warts of OSS. There's public criticism over small details on a lot of important projects. That doesn't happen for closed stuff. Sure, a vendor may have four of the brightest devs in that field and they might hash it all out behind closed doors. The open alternative usually has another four of the top 12 minds in that field along with four pretty competent others and they have a better process for hashing it out.
Then there's that other guy who's not in the top 12 who goes it alone and comes up with something spectacular. So three of the four from the other open project jump on board because they can. And since this new project tries very hard to be backwards compatible, it just snaps in as an overnight replacement. That's part of the awesomeness of OSS.
I believe there is a general lack of awareness of what AOSP is without Google services and add-ons on top of it.
In some facets, AOSP is not a complete and working OS as is. In particular, I have personally had many issues with GPS location for the past few years. Out of the box, GPS simply does not work without additional non-free software to help it out. Additionally, many (that is, 95%) of the Android apps that you would find on the Google Play store do not function properly without Google services (which AOSP does not have). Applications that are built to run on stock AOSP are not the 'Snapchats' or 'Instagrams' of the world. They are typically FOSS projects that are built out of passion, but receive little funding or corporate support.
These shortcomings often carry over to third-party ROMs, such as Lineage.
So in my experience, as someone who used to flash a new Android ROM every week, it is not about freedom -- it's about basic functionality. One could also argue that, since the world operates on all kinds of proprietary platforms that aren't available on stock AOSP, we also lack the freedom to use AOSP as our daily driver -- simply because it often does not interface properly with these proprietary platforms.
In general, until we have open source handset hardware to work with, support for all the fine-tuned sensor and clock hardware will be bad. This is a problem Linux had for a long time, and it took a ton of effort to partially solve it. It seems a bit unfair to blame AOSP for not having drivers for specific hardware; that's not its function.
The big contribution of Purism phones is that more-open hardware. After that, the real question we should ask is, "What software platform can offer us the greatest value in the multi-dimensional optimization problem we face?"
It's true though that you wouldn't just flash AOSP. But it's also true that dismissing Graphene BECAUSE it is based on AOSP is unfair.
I am not meaning to paint those who work on AOSP or third-party ROMs in a bad light. The work they do is terrific and great for the community. I also do not mean to dismiss any of the fantastic work that Graphene brings to the Android community.
I am simply stating that the biggest difference between Librem and Android is that there are more hurdles to jump through to provide a completely usable and free AOSP phone to an end-user in 2019. Android has been made to host a Google ecosystem, where the Librem 5 is being created to host an open ecosystem.
It sounds like the Purism team identified this issue ahead of time and decided to provide that open hardware platform for us.
Librem 5 is not open hardware. I also don't understand why you're comparing hardware to an operating system that's perfectly capable of running on top of it with the strengths and weaknesses of the hardware underneath it. You make it sound like AOSP or GrapheneOS wouldn't run on it. I don't think it would make a very good hardware target due to having so many security regressions from the status quo but it could certainly be one of the official targets. Whether or not it's an official hardware target, people will be able to use GrapheneOS on it.
strcat, many thanks for your interesting explanations.
Could you write more on the state of open hardware, and perhaps point me to open-hardware endeavours that have the slightest chance of success?
I understand that it is a very expensive undertaking to deliver a hardware machine that is based on an open architecture from the CPU to the actual communication/data-storage devices (logical design, actual layout, photolithography, assembly). Since patents on older circuitry must have all expired by now, it must be the lack of money that is the actual stopper for truly open systems.
PureOS seems to have these exact same problems except way worse.
Yes, a significant fraction of Android apps do not work on AOSP without Play Services. And 100% of Android apps do not work on PureOS. F-Droid alone has ~1800 apps. I do not see PureOS or PostmarketOS catching up to that level anytime soon.
FOSS projects that are built out of passion, but receive little funding or corporate support? Exact same situation on PureOS.
Are the Snapchats and Instagrams of the world going to port their apps over to this entirely new platform when they can't even be bothered to make versions of their Android apps that work without Google's services?
> 100% of Android apps do not work on PureOS. F-Droid alone has ~1800 apps.
This is a fair point. It's not a huge argument for me because I'm only interested in maybe 20 categories of app and I've never been thrilled with the 30 contenders in each category. For instance, if it has only one browser and that one is Firefox, that will be ok with me to begin with. It won't bother me if there are five other choices in F-Droid. But in general, more choice is good, so I grant that this is an important consideration.
> Are the Snapchats and Instagrams of the world going to port their apps over to this entirely new platform when they can't even be bothered to make versions of their Android apps that work without Google's services?
Android without Google's services is a tiny fraction of Android and a smaller fraction of the whole market. PureOS or anything else with even smaller share can expect to be similarly ignored. But Android sans G seems even less likely to go viral than something else.
For one thing, it's too fractured. There is no AOSP brand. There's a bunch of little no-names that happen to offer AOSP under some name that isn't "AOSP" and has no recognition at all. If two or three lower-tier makers offer "Brand C" phones, it could spark. Maybe not in your neighborhood. But if it catches on in India or Malaysia or Brazil, it might be enough to attract Instagram or Twitter. Remember that those companies don't want to depend on Google. They very much want Google out of the picture.
So a handful of apps can legitimize a new platform that is attracting a million or ten users anyway. Then it becomes perilous not to be on that platform. WhatsApp can't afford to let some up and comer get a foothold just because WhatsApp wasn't available on the viral new platform.
Ahhhhhh. Ok, I'm going to quit dreaming for now and get back to work. I'm not holding my breath, but I do think it can happen. It just takes the right lucky timing. There have been so many incidents lately that I think if there were something ready to take advantage of them, the timing is right.
> I believe there is a general lack of awareness of what AOSP is without Google services and add-ons on top of it.
That lack of awareness seems to be your own.
> In particular, I have personally had many issues with GPS location for the past fews years. Out-of-the-box, GPS simply does not work without additional non-free software to help it out.
GPS doesn't require Play Services, etc. Play Services provides supplementary network-based location services for providing a coarse, inaccurate location estimate without waiting for a while for a GPS lock. The infrastructure for this is open source and part of AOSP. It has generic, provider-agnostic support for services like supplementary location providers, text-to-speech, speech-to-text, geocoding, etc. Play Services is what provides these on phones with Google Play, but there are alternative implementations used by Amazon and in China.
> Applications that are built to run on stock AOSP are not the 'Snapchats' or 'Instagrams' of the world.
Yet apps like WhatsApp, Facebook's apps, Microsoft's apps, etc. do work without Play Services, despite what you claim. A lot of these mainstream apps work fine, and there's a large ecosystem of open source apps that are mostly designed to run without Play Services. Providing the Play Services APIs with an alternate implementation is also certainly possible, although I would prefer a different approach than microG.
How is any of this resolved by moving to a completely different OS with far less privacy and security, none of these mainstream applications you talk about and barely any open source application ecosystem by comparison? I don't get it.
You seem to be off on the state of Google Play Services from a real-world standpoint. Case in point: Microsoft's core apps like Outlook and Skype don't work without Google Play Services enabled, even if you find the APKs somewhere other than the Play Store and sideload them.
Microsoft's apps are specifically an example I've given of how closed Android truly is: Even Google's competitors, which have all of the same service capabilities, are essentially forced to use Google Play Services. Especially when you consider the other top HN item today about how Google now essentially requires all apps use a closed source Firebase library for push notifications.
And while yes, Google Location Services is a location provider that slots into Android, you are missing that Google has convinced app developers to call it directly, rather than using the Android location provider. This means that no alternate location provider will do: Google Location Services is hard coded into almost every location-based Android app today.
If you're willing to make your location known in order to take advantage of location services why wouldn't you want the very best possible service? There are complicated workarounds that can be used in place of Google's location services but none of them are anywhere near as easy to implement for the app developer or as easy to use or as accurate for the end user.
GPS doesn't make your location known at all; it's receive-only. It sends information about your location to nobody -- it computes your position (by trilateration) from publicly broadcast signals.
And, I would much rather "make my location known" to about fifty other companies before I would want Google to have it.
Yeah, GPS is actually insanely cool technology, and the US making it available to everyone was a real public service. Now of course, other nations are, partially for defense purposes of course, deploying similar networks as well.
And it's just out there. Usable with no subscription, no account, nothing. It's just free data.
Notice I didn't specifically mention GPS, although I agree that it is pretty cool. That said, GPS alone isn't capable of providing the UX that end users expect from a modern app. Fused Location is required for more accurate location information and it isn't passive like GPS.
I used CopperheadOS (without GApps) on a Nexus 6P as my daily driver for almost 2 years. Very few "mainstream" apps worked; they would loudly complain about the lack of Google Play Services, and at best would lose functionality (e.g. Slack, which apparently relies on Play Services for notifications) or at worst would crash either immediately or within a few minutes after launch (multiple reasonably-popular online dating apps had this problem).
In short, of the apps I tried that weren't distributed via F-Droid, most of them suffered from varying degrees of brokenness without Google Play Services (and these same apps work fine on my HTC One M8 and my current-daily-driver OnePlus 5T, both of which run LineageOS w/ GApps).
You're right, though, that "some Android apps work fine" is a better situation than "no Android apps work at all". Hopefully GrapheneOS can leverage that advantage well. It'd just be useful to acknowledge that it ain't all sunshine and rainbows just because it's AOSP-based; whether it's microG or something that ain't a security landmine waiting to blow off someone's leg, addressing that issue with an alternative service provider would be a game-changer, and would readily address the one issue I ever had with CopperheadOS (and - it seems - likely would still have with GrapheneOS).
Have you tried a pure AOSP + F-Droid on Nexus/Pixel or Xperia? It's quite good. The only major drawback are closed drivers. But the userland is nice, open and polished.
My worry with Librem and all those initiatives is that rebuilding an ecosystem like F-Droid takes a lot of effort and time.
I primarily used a OnePlus 3 (non-3T) and a Nexus 4. The OnePlus 3 seemed to have a very active ROM community.
I tried many of the well-known ROMs: Lineage, Paranoid, Resurrection.
I also tried many of the OnePlus-specific ROMs, that were typically maintained by only one or two devs each.
Most of the features worked perfectly fine on both phones. But the deal-breakers were often the simple things: GPS (w/o downloading extra geolocation database services) and Bluetooth were the kickers for me. These services were consistently spotty across every ROM I tried.
My experience is as of a couple years ago. I have since moved away from the ROM scene, simply because I do not have the time to deal with this sort of stuff anymore.
A Pixel running stock AOSP with F-droid and Chromium is the bleeding edge of what's possible with open source. There's no better UI/UX in existence and the tragedy of it all is that outside of Android developers and software engineers most people never get to experience it at all.
The reality is that Librem is unnecessary because we have F-droid. There's nothing wrong with F-droid and as time goes on more mainstream apps will continue being brought over.
Too bad the Pixel doesn't have a headphone jack, otherwise I would have bought one. I've also heard it was plagued with hardware issues. Stuck on Nexus 5 + LineageOS for the time being.
GrapheneOS is sadly only available on Pixel devices.
I tried microg's Lineage image on the OnePlus 3. It was probably the most painless ROM I had ever flashed.
The microg project has been fantastic in providing an open mechanism to interface with Google's services. When I first tried it, I believe they did not yet have a working implementation of all the Google services. Some apps complained about Google services, some did not. You still needed to sign into Google though, which might turn some people away.
For those who want to interface with Google on an open-source ROM, microg's image is probably the way for you.
I, for one, cannot attest to how many backdoors this Ubuntu installation might have, and I doubt everyone has the knowledge to validate their complete FOSS stack.
> As such I have a very hard time believing that Librem with be as secure as modern Android.
Android isn't secure, it's limited, that's the whole problem here. Any security you can't control isn't a security feature but just a limitation. It's "secure" because you can't do anything interesting with it.
I use Android with reluctance, since my favourite OSes were Symbian and Windows Phone; additionally, I dislike Android J++, and the NDK is relatively constrained.
Still, I can do plenty of interesting things with my Android gadgets.
I guess that's because they know that Chain-of-trust only gets you so far.
Eventually you're running something big, with bug after bug found every month and an attack surface that includes the local filesystem and the network. At that point the buzzwords make no difference.
Chain of trust won't reduce the attack surface, but adopting memory corruption mitigations and replacing C code with something with stronger memory-protection guarantees would -- and while some kinds of memory protection can be bolted on later with minimal disruption, minimizing C is best done from the start.
That sounds reasonable, but here we are with Android's endless security disaster and all their apps written in memory-safe Java from the beginning.
The most cancerous aspects of Android are by design, that you cannot control network exfiltration from apps, you cannot update or modify the OS pieces at will, and the apps are monetizing everything you do and everything they can find against you. Librem will answer these.
> that you cannot control network exfiltration from apps
GrapheneOS has a Network permission toggle, which is one of the features already restored from past work on the project. There are many other privacy and security features that still need to be ported to the latest release, although a lot of them have become standard features, especially in Android Q. https://gist.github.com/thestinger/e4bb344dcc545d2ee00dcc22f... is an overview of the Android Q privacy improvements (not security improvements, just privacy) in the context of GrapheneOS. To conserve development resources, past features that are becoming standard aren't going to be ported over; instead, the project will just wait for the standard implementations, which are being released around August. Some of them will need to be adjusted to be a bit more aggressive for apps targeting the legacy API level, but that's a lot less work than maintaining downstream implementations of all of this.
> you cannot update or modify the OS pieces at will
Having a well-defined base OS with verified boot and proper atomic updates with automatic rollback on failure is a strength, not a weakness. It's the same update system (update_engine) as ChromeOS. The update system is not what's wrong with the broader Android ecosystem; the lack of updates for vendor forks is. The migration towards everything being separately updatable APK components, rather than moving closer to the ChromeOS design, is a negative thing for GrapheneOS, and it's one of the things that has to be changed downstream to improve verified boot.
> Librem will answer these.
That's nonsense. First of all, that's hardware, and also moving to a far less secure software stack with non-existent privacy and security, an inferior update system and no verified boot is not a solution to these problems. The solution to privacy and security problems is not completely throwing away privacy and security...
>The migration towards everything being apk components that can be separated updated rather than moving more towards the ChromeOS design is a negative thing in terms of GrapheneOS
Doesn't something like fs-verity help here, instead of just a block-based read-only partition that can be verified? Overall, it seems like a net gain for the Android ecosystem if Google moves more and more functionality out of band, away from OEMs, since OEMs have no incentive to do anything other than sell devices. That is, as long as everything is still pushed to AOSP.
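For context, fs-verity authenticates individual files via a Merkle tree over their blocks, whereas dm-verity authenticates a whole read-only block device. A toy sketch of the per-file idea (a flat two-level tree, not the kernel's actual on-disk format):

```python
# Toy sketch of the per-file Merkle idea behind fs-verity: hash each
# fixed-size block, then hash the concatenated block hashes into a single
# root that stands for the whole file. Any tampered block changes the root.
import hashlib

BLOCK = 4096

def merkle_root(data: bytes) -> bytes:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)] or [b""]
    leaves = [hashlib.sha256(b).digest() for b in blocks]
    return hashlib.sha256(b"".join(leaves)).digest()

def verify(data: bytes, trusted_root: bytes) -> bool:
    # A real implementation verifies one block against the tree on each
    # read; here we just recompute the root to show the pass/fail decision.
    return merkle_root(data) == trusted_root
```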
The large majority of Android security exploits are in drivers written in C and C++, which is why each release further locks down what native code is allowed to do.
Chain of trust does protect you from evil maid attacks.
And yes, there can be bugs in the application layer, but at least half of all CVEs are memory corruption bugs.
These practices do offer a massive reduction in attack surface. You seem to argue it doesn't matter since it doesn't eliminate attack surface completely.
No, chain-of-trust only has one trick... it can check that what you're about to run is unaltered from what was signed to some degree of probability.
If that is the - shipped and validly signed - bug-ridden nightmare fuel like the proprietary Qualcomm 802.11 stack or the proprietary multimedia bits that are a rich and continuous source of vulnerabilities (take a look through the last few months here: https://source.android.com/security/bulletin/2019-06-01), then all the buzzwords did was ensure the vulnerable version is running so it can be exploited. The evil maid can get in that way.
Librem's security model is that of a Linux box with signed update packages... it's not a panacea against hacks, but neither are the buzzwords you mentioned. At least they're trying to eliminate the really dangerous proprietary pieces that constantly produce new vulnerabilities.
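The "one trick" both sides agree on can be sketched concretely: each boot stage pins the hash of the next and refuses to hand off to altered code, but a signed-yet-buggy blob verifies just fine. A toy model, with illustrative names:

```python
# Toy sketch of a verified-boot chain of trust. Each stage carries the
# expected hash of the next stage; the first hash is pinned by an
# immutable ROM root of trust. Verification proves the bits are
# unmodified, NOT that they are free of bugs.
import hashlib

def sha256(b: bytes) -> str:
    return hashlib.sha256(b).hexdigest()

def boot(stages, expected_hashes):
    """stages: ordered stage images; expected_hashes[i] is what stage i-1
    (or the ROM, for i == 0) pinned for stage i."""
    for image, expected in zip(stages, expected_hashes):
        if sha256(image) != expected:
            return "halt: stage failed verification"
    return "booted"
```

Note that a vulnerable-but-validly-signed image passes this check, which is exactly the point made above about the proprietary blobs.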
> No, chain-of-trust only has one trick... it can check that what you're about to run is unaltered from what was signed to some degree of probability.
This is only one of many privacy and security regressions from moving to a far less secure software stack without anything close to the same level of hardening or work on privacy / security.
> If that is the - shipped and validly signed - bug-ridden nightmare fuel like the proprietary Qualcomm 802.11 stack or proprietary multimedia bits that are a rich and continuous source of vulnerabilities (take a look through the last months here https://source.android.com/security/bulletin/2019-06-01) all the buzzwords did was ensure the vulnerable version is running so it can be exploited. The evil maid can get in that way.
Counting CVEs is not a way to judge security. Qualcomm's SoC hardware, firmware and driver security is the leader among the available options. The huge amount of both internal and external public security research targeting it is a strength rather than a weakness. The lack of attention given to other assorted drivers is not a strength of those drivers but rather reflects their obscurity and lack of hardening / auditing.
It's also not the norm in the Linux world to assign a CVE for a security vulnerability when it's fixed. The norm is to fix them silently without trying to obtain a CVE. It's completely bogus to judge security based on counting CVEs for many reasons. Not having public lists of the fixed vulnerabilities with CVEs assigned doesn't mean there aren't a bunch of vulnerabilities being fixed, and it's even worse if the vulnerabilities aren't being found and fixed.
Every x86 and ARM device is proprietary and has a massive amount of complex proprietary hardware, firmware and microcode. There is no escaping that for these architectures. The Librem 5 is not an open hardware device and has a proprietary SoC, proprietary Wi-Fi, etc. all with their own proprietary firmware and in some cases entire operating systems (Wi-Fi / Bluetooth, cellular, etc.). The distinction of an OS like PureOS is that they don't ship updates to this firmware but rather leave it vulnerable to all the fixed security issues, because they won't redistribute the proprietary firmware updates. The firmware is still present, but the OS is 'free'. Either way, that firmware is running, and with a bunch of known vulnerabilities if you don't update it.
Proprietary hardware and software are also not inherently less private or secure than open source software. These are differences in development model, not privacy or security. You're very mistaken if you think open source software eliminates backdoors / vulnerabilities or even reduces them. That's not how things work out in reality. Open source lowers the barrier to entry for security research, whether for good or evil, but research is certainly still possible without the source. Either way, the comparison you're making is between proprietary hardware + proprietary firmware + open source OS and proprietary hardware + proprietary firmware (but without updates shipped by the OS) + open source OS.
> Librem's security model is that of a Linux box, signed update packages
Again, you're mixing hardware and software. The Librem hardware isn't only for PureOS and will be able to run Android.
Signed update packages alone are inferior to not only having signed update packages but also verified boot and attestation. GPG also has far too much complexity and attack surface for this, and having online build / signing servers, etc. is a joke.
Android is Linux, and the Linux kernel is not a strength but rather the most prominent weakness in Android. A massive monolithic kernel written entirely in a memory unsafe language and entirely responsible for enforcing the low-level privacy/security model is not a strength. That's a major problem which needs to be resolved, not a hole to dig deeper. It's fundamentally not fixable, and while a bunch of work on mitigations can help, it's very limited in what can be achieved.

Moving to the desktop Linux software stacks also gives up the vast majority of these mitigations and the security model that has been rapidly improved over the years. It gives up having such strict SELinux policies developed as an integral part of the base system, as just one of many things that are lost. This level of security cannot be obtained on a traditional Linux distribution without a well-defined base system that's developed together with lots of holistic systems-level privacy and security work. Addressing it in a bunch of separate fragmented projects doesn't work out, and prevents having the same kind of security model and security policies.

The way that SELinux is used on Android compared to a distribution like RHEL / Fedora is night and day. It's drastically different and not even comparable at all. The same goes for the deployment of other privacy and security features / models.
Kind of: Google could release Android running on top of any OS that implements the NDK stable APIs plus their POSIX subset, and no one besides OEMs would notice the change.
Yes, that's true. I mean the Android Open Source Project, rather than Android as an OS family. For Android as a platform defined by the Compatibility Definition Document / Compatibility Test Suite, it doesn't have a specific kernel, and Windows could have become certified as Android if they had actually gone ahead with pursuing that.
> isn't going to be particularly security-focused: no attestation, no trusted boot
This is only true initially, presumably due to time and funding constraints. From the FAQ (https://puri.sm/faq/):
> What are your plans for tamper-proofing the Librem 5?
> We hope to have a version of PureBoot available for the Librem 5 for users who want to verify it with a Librem Key. We cannot commit to it being available at launch but it’s a goal.
I imagine it would be possible to get Genode running on the Librem 5, which would be even more secure than Android. Only you'd be limited in what applications you can run.
Still, even on Linux, you can set up SELinux or AppArmor to harden your system as much as possible, run untrusted applications as a different user, compile your own hardened kernel, and so on. It's going to be a less secure system for casual users, but it'll allow power-users to more easily (you can do that on Android as well, but it's more difficult) secure their system as much as they want.
> It's going to be a less secure system for casual users, but it'll allow power-users to more easily (you can do that on Android as well, but it's more difficult) secure their system as much as they want.
No, it really won't. Doing substantial privacy and security hardening requires years of work by a team focused on it, and the OS needs to be developed with it in mind. Sure, you can enable SELinux elsewhere, but you won't have anything remotely comparable to the complete, full-system SELinux policies developed as part of the Android Open Source Project and deeply integrated into it. You're talking about users doing all this from scratch somehow, when there is hardly any interest in it in that ecosystem. There's barely any application sandbox or permission model to speak of, and projects like Flatpak are not approaching it in a meaningful way that avoids trusting apps.
You're suggesting throwing out having an application security model and all this privacy / security work to reinvent it all from scratch for a new ecosystem without existing applications. It's hard to understand how that makes anything easier.
Having the well-defined base OS with verified boot and clear separation between the OS and applications which are sandboxed and offered capabilities via a permission model is crucial. It's not an advantage for security to completely do away with that. It's important to implement each feature / capability in a way that fits into the overall security model. Developers love taking shortcuts and doing this in a lazy / negligent way, and you can see exactly that with how people implement features via the shortest path of depending on app-accessible root instead of doing it properly, even when that's a niche thing.
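The default-deny sandbox-plus-permissions idea described above can be sketched as follows; the class, the contacts example and all names are illustrative, not any real Android API:

```python
# Toy sketch of an application permission model: apps start with zero
# capabilities, and every privileged operation goes through an OS-side
# check instead of trusting the app.
class Sandbox:
    def __init__(self):
        self.granted = {}  # app id -> set of granted permissions

    def grant(self, app: str, permission: str):
        # In a real OS this is the user consenting via a prompt or toggle.
        self.granted.setdefault(app, set()).add(permission)

    def request(self, app: str, permission: str) -> bool:
        # Default-deny: an app that was never granted anything can do nothing.
        return permission in self.granted.get(app, set())

def read_contacts(sandbox: Sandbox, app: str):
    if not sandbox.request(app, "contacts"):
        raise PermissionError(f"{app} lacks the contacts permission")
    return ["alice", "bob"]  # stand-in for real data
```

The point of routing everything through one model is that new features are forced to fit it, instead of each feature inventing its own (or relying on app-accessible root).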
Attestation of what? Software security is inferior in Android (hello, leaky API), and the hardware has been untrusted in Librem since day 0. Show me a TPM chip with open firmware, or it's a security disaster on my board. Seccomp is a thing. Also, Flatpak is the last thing I would consider using.
Yes, I don't even customize mine. I leave the OS and tools in default configurations. I do my work in the cloud. If my computer is hit by lightning, I toss it in the dumpster and get another one. Back to work within an hour.
My toaster is also boring, and my pipe wrench, and my floor jack. Just tools.
As a Linux distribution maintainer, I second that. There are not that many packages that still use SCons, but the ones that do often require boilerplate and/or patches to honor standard environment variables like CC, CFLAGS, and LDFLAGS. Cross-compiling a SCons package is a nightmare.
If you like that SCons uses Python, I'd suggest trying Meson. It does everything right, and its language is a restricted, Python-like subset: https://mesonbuild.com
How easy is it to extend Meson for languages that it doesn't ship support for by default? That always seemed to be one of the strong points of SCons/something many modern build systems don't support so well.
Meson developers are highly opinionated regarding what "should" and "should not" be supported, and aggressively block anything that might be considered, or even lead to "badness". Sometimes this is justified, but it makes one's job very difficult if one disagrees with them. For example, they insist on completely integrating Rust into the build framework, but also (apparently) don't have the manpower to actually implement that, so, coupled with the fact that extending the language is impossible from within the language, practical Rust (i.e. any dependencies, any Cargo use at all) is difficult to impossible.
It's not possible in the way it's possible in SCons. You can basically use custom_target, but its usefulness is limited, since there are no user-defined functions in Meson's subset of Python. CMake is a little better in this regard, since it at least offers macros and functions.
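For what it's worth, custom_target is still enough for simple one-off tools. A minimal, illustrative meson.build fragment (the generator script gen.py, the file names and the project are all hypothetical):

```meson
# Hypothetical example: wrap an external code generator Meson doesn't know
# about. custom_target() is the generic escape hatch; '@INPUT@'/'@OUTPUT@'
# are substituted by Meson.
project('demo', 'c')

gen = custom_target('generate-header',
  input : 'schema.txt',
  output : 'schema.h',
  command : [find_program('python3'), files('gen.py'), '@INPUT@', '@OUTPUT@'])

executable('demo', 'main.c', gen)
```

It works for fixed pipelines like this, but, as noted above, there's no way to wrap such a pattern in a reusable user-defined function from within the Meson language itself.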
> Nitpicking is more excusable for women to do. That's ultimately because men talk to others
> exchange important information, while women only talk to others for social negotiation and
> to test the social standing of others.