at612's comments

Download Signal? No, thank you.

The fact that the guy behind it is hyping it via the New York Times, a generalist publication, instead of validating the thing through professional cryptographers (which he isn't) and recognised privacy champions such as the EFF is very telling.

The thing has not been properly validated or verified (for a start, because there is no design document to validate against, and no published goals to verify against), it uses an ad-hoc encryption scheme from a non-cryptographer, it is not open source (see F-Droid discussion why it's not there), it uses hardwired servers controlled by a party or parties which are not known to be trustworthy, and apparently it requires Google Play services, which nobody who is truly concerned about their privacy is going to use in the first place (and definitely one should not).

From the way this is going, it is becoming clearer by the day that this is just another start-up. Their target market is unsophisticated but paranoid users, and hipsters with no real need for privacy but who think they should make some kind of statement. Their plan is to hype it up (e.g., via the NYT), get enough users, then get bought by one of the so-called "social media" players. It is more attractive to them than Telegram because the latter is run by a Russian, which to the American public sounds sinister (Mr Brin and countless other great scientists and innovators notwithstanding), and because Telegram's servers are probably based in Germany, where there are (still) some proper privacy laws, which would cause some headaches for the acquiring party. Besides which, there is a good chance that their current investors come from those "social media" companies, or are the usual Silicon Valley VC crowd, so things stay between friends, as it were.

So, in brief:

* If you want a new Skype, go for it.

* If you care about the privacy of your communications, you should avoid it.

* If you need to keep your comms private, you must avoid it.

Does anyone disagree? Feel free to reply and tell me why!


I'm probably just inviting myself to get trolled by replying to this, but this comment is just ridiculously wrong on so many levels.

> The fact that the guy behind it is hyping it via the New York Times, a generalist publication, instead of validating the thing through professional cryptographers (which he isn't) and recognised privacy champions such as the EFF is very telling.

Cryptographer Matthew Green on Signal's crypto and code quality (it was called RedPhone/TextSecure at the time of this writing): https://blog.cryptographyengineering.com/2013/03/09/here-com...

Version 1.0 of EFF's Secure Messaging Scorecard gave Signal 7/7: https://www.eff.org/node/82654.

> The thing has not been properly validated or verified (for a start, because there is no design document to validate against, and no published goals to verify against)

Signal has been analyzed, with favorable results, by academic researchers at least twice:

- https://eprint.iacr.org/2014/904.pdf

- https://eprint.iacr.org/2016/1013.pdf

> it uses an ad-hoc encryption scheme from a non-cryptographer

Moxie Marlinspike and Trevor Perrin probably wouldn't call themselves "cryptographers," but almost anybody in the field would agree that they are experts on applied cryptography.


> I'm probably just inviting myself to get trolled by replying to this

I'm sorry that you get that impression, but I do appreciate your input.

> Cryptographer Matthew Green on Signal's crypto and code quality (it was called RedPhone/TextSecure at the time of this writing)

That's the application that they sold to Twitter, not the one being talked about here. I do not know how different the code bases are.

It is also around that time that the app had a gaping, amateurish hole: it was simply leaking everything via logcat. And what does the guy do? Instead of addressing the issue like a professional, he goes on a complete tangent rubbishing F-Droid (https://github.com/WhisperSystems/Signal-Android/issues/53) and then makes rather poor excuses as to why you should get your application from the Google store and not from anywhere else.

Excuses which, by the way, have been evolving over time. I think he eventually admitted that he wants to keep track of how many users are using it (handy to show to your potential buyers).

He also has a history of lying, such as when he used fake WHOIS details to run his "Google anonymiser" thing. And of course, when he was shut down by the registrar, as you do when someone has given you false details, what did he do? He went to the press to whine about the registrar! After he entered a contract in bad faith, something which happens to be a prosecutable offence. That's the sort of person we are talking about here. I hope you will understand if his word does not exactly fill me with confidence.

> https://www.eff.org/node/82654.

That page starts with: "This is version 1.0 of our scorecard; it is out of date, and is preserved here for purely historical reasons."

And continues with: "the results in the scorecard below should not be read as endorsements of individual tools or guarantees of their security"

> Signal has been analyzed, with favorable results, by academic researchers at least twice:

Yes, I am aware of those. And that is not what validation and verification are; as I said, in the absence of publicly available design documents, they are impossible to perform independently. The guy is trying to make it look like he's selling a "secure" communication platform, but if you presented that to a defence contractor (which I have some experience with) you would be laughed out of the building. Proper security is not done like this at all. For a start, you actually define your goals, i.e., what you intend to secure, against what threats, etc., etc. If you can show me a paper with that information I would be grateful.

Notably, you may have noticed that those papers, like Green's, are protocol analyses, not analyses of the entire solution. In that respect, you're back to the previous situation: the protocol might be ultra-secure, but if you're still leaking your plaintext on a different channel...

> Moxie Marlinspike and [...] probably wouldn't call themselves "cryptographers,"

At the risk of sounding elitist, what is his academic background? (I elided the other person because I do not know who he is).

> but almost anybody in the field would agree that they are experts on applied cryptography.

What do you base that conjecture on?


>He also has a history of lying, such as when he used fake WHOIS details to run his "Google anonymiser" thing. And of course, when he was shut down by the registrar, as you do when someone has given you false details, what did he do? He went to the press to whine about the registrar! After he entered a contract in bad faith, something which happens to be a prosecutable offence. That's the sort of person we are talking about here. I hope you will understand if his word does not exactly fill me with confidence.

I really don't see why someone should be on my shitlist for lying to godaddy dot com or whatever giant registrar, unless you consider fudging identifying details about something that really doesn't matter (especially considering he was very openly associated with the project) some sort of horrible moral offense. I especially find your taking massive umbrage at fudging personal information baffling, given how privacy-minded you otherwise seem.

>At the risk of sounding elitist, what is his academic background? (I elided the other person because I do not know who he is).

Combined with the above, the way you're hand-waving away the other of the two original developers of the protocol really just makes it seem like the position you've taken against Signal is mostly predicated on some sort of grudge against Marlinspike himself. Yes, trashing F-Droid was not a great thing to do, and you might see him as someone with a strong penchant for self-promotion, but the way you keep tying your criticisms to Marlinspike personally really muddles your case. For example, you object to him promoting Signal in a New York Times piece, saying it is a generalist publication, and posit that he's just trying to drum up attention so he can find a buyer. That may or may not be true, but isn't one of the most important goals of a secure messaging application to get people to actually use it and to achieve widespread adoption? The main lesson I've learned from GPG mail is that a perfectly private means of communication is worth very little if I can't actually convince anyone to use it with me.


> I really don't see why someone should be on my shitlist for lying to godaddy dot com or whatever giant registrar unless you consider fudging identifying details about something that really doesn't matter,

I think I can see where you are coming from. You seem to compare this with, say, opening a GMail account under an alias, if I understand correctly.

However, holding domain names and, at the time, SSL certificates requires a different sort of accountability. I can elaborate on that if you wish, but I trust it won't be necessary.

> especially considering he was very openly associated with the project,

In the same way that Mr platinumrad or Ms at612 are associated with this discussion? By the use of an alias?

> some sort of horrible moral offense.

Yes. And please note he did not just lie to the registrar. When he got caught, he went and whined to some journo, who published a piece criticising the registrar without bothering to verify the information first. It was all presented as if the registrar was in the wrong, when they were following the rules, which are there to protect the public in the first place. This coming from some bloke who was saying "don't trust Google, trust me. Because."

> I especially find your taking massive umbridge with fudging personal information baffling given how privacy-minded you otherwise seem.

I value my privacy. At the same time, when I enter a contract, I do so in good faith and of course part of it is letting the other party know who I am.

> really just makes it seem like the position you've taken against Signal is mostly predicated on some sort of grudge against Marlinspike himself.

Yes, you are correct. My apologies if that wasn't clear. I question the ethics, motivation, and competence of this one individual, who happens to be closely associated with said project.

> Yes, trashing F-Droid was not a great thing to do

To put it mildly. On an incidental note and more generally, have you ever seen him do a mea culpa?

> [but] isn't one of the most important goals of a secure messaging application to get people to actually use it and to achieve widespread adoption?

I do not know. I would guess not (based on defence experience). But the main point is that him saying "oh sure, it's secure" does not make it secure. He seems to be taking advantage of the public's inherent credulity and lack of awareness of what "security" actually means and involves. We have gone through this discussion already, so for an example of what I consider a better developed and correctly presented security solution, please see the Conversations IM application.

> The main lesson I've learned from GPG mail is that a perfectly private means of communication is worth very little if I can't actually convince anyone to use it with me.

This is a different, and long, discussion, but the reason you are seeing that is probably that the other party has mentally (or formally) done a cost/benefit analysis and decided that their information is not of enough value to justify the extra effort to protect it. Rightly or wrongly.


I think that issue highlights the problem with unofficial repositories. Users remained vulnerable because their upstream provider didn't update quickly enough. It culminated in a user spamming the official issue tracker with an outdated and annoying bug report.

This isn't unique to Android: there are multiple ongoing efforts at the moment in the Linux world to lessen frustrations with distribution repositories. Snappy, Flatpak, and AppImage intend to unify application deployment and allow users to install applications from anywhere. In most cases, this could mean pulling directly from the application developer themselves. GNOME and KDE will likely encourage this.

I know some Firefox developers who have grouched at the delay between official releases and when distributions finally deploy them, so this problem isn't exclusive to desktop environment developers.

Back to Android: Moxie had a point when he claimed that Android is in a privileged position because its package verification traces back to the original developer. It doesn't matter where you get an APK from: the developer's website, Google Play, APKMirror, or BitTorrent. If you have the developer's public signing key, you can verify the authenticity of the APK.
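To make that concrete, here is a rough Python sketch (my own, not from Signal or F-Droid; it assumes the `cryptography` package and an APK that still carries a v1/JAR signature under META-INF/): extract the signing certificate and compare its SHA-256 fingerprint against the one the developer publishes. If they match, it doesn't matter where the APK came from.

    import sys
    import zipfile
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.serialization.pkcs7 import (
        load_der_pkcs7_certificates,
    )

    def signing_cert_fingerprints(apk_path):
        # SHA-256 fingerprints of the certs in the APK's v1 (JAR) signature.
        fingerprints = []
        with zipfile.ZipFile(apk_path) as apk:
            for name in apk.namelist():
                # v1 signature blocks live in META-INF/*.RSA, *.DSA, or *.EC
                if name.startswith("META-INF/") and name.upper().endswith((".RSA", ".DSA", ".EC")):
                    for cert in load_der_pkcs7_certificates(apk.read(name)):
                        fingerprints.append(cert.fingerprint(hashes.SHA256()).hex())
        return fingerprints

    if __name__ == "__main__":
        for fp in signing_cert_fingerprints(sys.argv[1]):
            # compare against the fingerprint the developer publishes
            print(fp)

(APKs signed only with the newer v2/v3 scheme won't have those META-INF entries; apksigner from the Android SDK handles those cases.)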

F-Droid represented a serious step backwards in Android security, back when they used to self-sign APKs. It wasn't possible any longer to cut out the distributor from the chain of trust. Fortunately, they reacted to Moxie's criticisms, and F-Droid now retains the original package signature when the build can be reproduced.

From a developer perspective however, encouraging or even tolerating unofficial installation channels for secure communication software is bad. If vulnerable users are in contact with non-vulnerable users, they unknowingly put both parties at risk. If the ecosystem evolves to the point where this is common, the whole system is insecure.

What Android desperately needs is a high-quality, non-profit, privacy-friendly, charity- and grant-driven app store. It must entice open-source app developers. It cannot do self-builds, except for reproducibility. It needs crash-reporting, analytics, usage metrics, device-specific builds, localization options, and more. It requires dead-simple tools for command-line deploying.

Until then, in my opinion, F-Droid will never be accepted by app developers. F-Droid is for users only. Not for the same purposes, either: for the cautious user, F-Droid mainly shines as a locally-setup repo for self-deployed apps.

P.S.: Perrin & Moxie recently began documenting Signal Protocol: https://whispersystems.org/docs/


> From a developer perspective however, encouraging or even tolerating unofficial installation channels for secure communication software is bad.

What is your threat model?


> it is not open source

Huh?

https://github.com/WhisperSystems/Signal-Android

I'd even argue it's free software - the team has managed to create some confusion about distributing modified binaries (with regard to using the servers Signal operates) but have clarified that it is indeed ok to build your own binary from the source they provide, and use their servers.

I'm not quite convinced about their argument for official app store distribution and updates, but I can understand the argument.

> it requires Google Play services

I'm fairly sure the iOS client doesn't depend on Google Play Services. Sticking with an app store does require trust in the provider though.

Whisper Systems have made some fairly clear choices, and while it's perfectly fine to disagree, I think it would be best to avoid FUD.

It certainly strikes me as one of the better options for pragmatic secure messaging, that allows for a fairly narrow and reasonable set of threats (Google/Apple/Microsoft (possibly more than one of each, depending on your platform), Whisper Systems themselves, probably most state actors).

The other reasonable option I'm aware of (that makes slightly different trade-offs) is ChatSecure/zom.im (where zom.im is a "friendly" fork of ChatSecure).


> I'd even argue it's free software

Terminology. What you call free software I call open source. As you go on to mention, you can see the source but not use it in any meaningful way. In particular:

> but have clarified that it is indeed ok to build your own binary from the source they provide,

Exactly. Your own binary. From their source.

Build your own binary for someone else, and it's "malware", as the guy had the nerve to call F-Droid in that bug report (here: https://github.com/WhisperSystems/Signal-Android/issues/53). That sort of bad faith, coming from a known liar (see my other reply) is what I really cannot condone.

> and use their servers.

Yeah, similarly. Use a source other than theirs or servers other than theirs and they start whingeing.

That is not open source.

> I'm not quite convinced about their argument for official app store distribution and updates,

Possibly because every time it's a different excuse?

> but I can understand the argument.

Yes, so can I: they want to control the platform so that the users are theirs, so that they can sell it to someone else, like they did last time.

And I would be perfectly fine with that, if it wasn't done via lies, deception, and denigrating third parties, particularly the chaps at F-Droid who at least have the decency of using their real names (not to mention not seeing you as the product).

> Sticking with an app store does require trust in the provider though.

Agreed. How high is Google in your "trusted" list? Yes, I'm picking on Google because it's a bit of an easier target than Apple, but still.

> I think it would be best to avoid FUD.

I agree, and that's precisely why I feel the need to speak up. I challenge the honesty not of their enterprise (which is no different from that of Skype, Whatsapp, or any other player) but of the way they are pursuing their goal. See above.

> It certainly strikes me as one of the better options for pragmatic secure messaging,

I don't know. As mentioned elsewhere, XMPP meets all my requirements and is not vendor-dependent. But the availability of options depends on each user's definition of things like "pragmatic" and "secure" (and even "messaging" for that matter!)

From seeing what's out there though, it appears that modern versions of WhatsApp (which I don't use, I'm FOSS-only) offer essentially the same capabilities as this application, including end-to-end encryption. And of course, essentially the same disadvantages. I could be mistaken here though.

> that allows for a fairly narrow and reasonable set of threats (Google/Apple/Microsoft (possibly more than one of each, depending on your platform), Whisper Systems themselves, probably most state actors).

I guess it also depends on each user's definition of "fairly narrow and reasonable". :-)


While you might claim that by running an AOSP derivative you need to trust Google less (and in turn trust something like F-Droid more, perhaps), if you want a chat/IM client on an Android device it's hard to see how Google isn't already one entity you need to trust (along with a list of hardware manufacturers).

As for your other comments: you may run your own server infrastructure from the same or derived source, run your own derived clients, distribute binaries, etc., but you can't dilute the brand. Similar to Debian cloud images, for example.

I'm not sure how that's "not FOSS".


> It's 2016 and our best crypto messenger options are worse than what we had 10 years ago when Skype was peer to peer, or Jabber with federation.

Actually, Jabber with OTR is pretty solid. If need be, you can use throwaway addresses too.

> I can understand the reasons for not supporting federation, but I disagree.

I disagree and can't understand the reasons, which however I suspect to be rather shortsighted. Just imagine if email had not been what came to be called a federated system.


For me, the takeaway from that article is this:

> Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we’ll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.

Indeed, we still "don't know" how to test. More generally, given the abundance of methodologies and their tendency to go through hype-and-dump cycles, I would say we still "don't know" how to write code in the first place.

We'll get there eventually, but for now I would take whichever approach, methodology, tools, and language that I use as having a "best before" date, and invest in it accordingly.


> How do you distribute the one time pad in the first place? If you do it insecurely, it's a waste of time. If you can do it "securely", why not just use that secure channel to send the message in the first place?

Because you may not have any messages to send at the time of the secure exchange of OTPs. Do note that one time pads are (or at least were) commonly used in the military.
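For anyone unfamiliar with how little machinery a one-time pad needs, here is a toy Python illustration (mine, purely for exposition, not a secure implementation): the pad is exchanged securely well in advance, and the message, whenever it eventually exists, is simply XORed against it.

    import secrets

    def xor(data, pad):
        # the pad must be truly random, at least as long as the data, and never reused
        assert len(pad) >= len(data)
        return bytes(b ^ p for b, p in zip(data, pad))

    pad = secrets.token_bytes(64)       # handed over in person, long before any message exists
    msg = b"meet at the usual place"    # composed much later
    ciphertext = xor(msg, pad)          # sender
    assert xor(ciphertext, pad) == msg  # receiver, holding the same pad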

> But the question is, how do we get to the point where you know that you have the correct keys and you can trust them?

That is not a technological problem per se, but rather a social one. Imagine that when you exchange phone numbers (or Farcebook IDs, if you're into that) with your work colleagues, or friends, or fellow attendees at that developer meetup, you also exchanged public keys.

Mechanically, the interaction is at about the same level of complexity, and effectively, as has already been mentioned, the web of trust already exists (Farcebook, ChainedIn, and all the other bollocks).

If any of those decided to implement secure end-to-end comms using PGP and offered you the possibility of uploading your public key for dissemination to your "friends", PGP might become ubiquitous in a matter of weeks. At a smaller scale, German email provider GMX is doing exactly this, by the way.
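To illustrate how small the user-facing part of that exchange is, here is a rough Python sketch (my own; a real deployment would use OpenPGP keys and the provider's directory, and the `cryptography` package is assumed): you hand over a public key the same way you hand over a phone number, and verification boils down to comparing a short fingerprint in person.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    private_key = Ed25519PrivateKey.generate()            # stays on your device
    public_bytes = private_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw
    )                                                      # what the provider would disseminate

    # A fingerprint short enough to read out loud or print on a business card:
    fingerprint = hashlib.sha256(public_bytes).hexdigest()[:32]
    print(" ".join(fingerprint[i:i + 4] for i in range(0, 32, 4)))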


> all the experts here

Which experts? And what are those fundamental flaws?


> Just get the national government to distribute RSA USB keys to every citizen.

I lived in a country that did exactly that. And it was a disaster. The keys were trivially easy to steal, even by accident (personal experience here), and you still have the same trust problem as before, except that with a central authority now you do not have as much control.

I have also used the electronic-signature-comes-with-your-ID-card thing, and it was a similar disaster, with dodgy drivers and half-arsed crypto implementations in common software. E.g., try using the same token in Firefox and Thunderbird (or anything else) at the same time.

PGP is fine. It's just that proper security is not easy. And the same applies in the physical world as much as in computing.


It's 2FA; it doesn't rely ONLY on the key for authentication, and a token can be revoked easily if stolen.

The implementation and the technology have some execution challenges, just like any tech project that doesn't have GooMicroZon people. Nothing special ^^


>The keys were trivially easy to steal, even by accident

So distribute keys on smart cards that don't allow you to export the key. This is what Estonia does, and - concerns about their election infosec aside - it seems to work pretty well.


> So distribute keys on smart cards that don't allow you to export the key

That's what I covered in the second paragraph. :-)

The thing is, both those implementations were a disaster from either a technological or a security point of view. We're not even getting into whether a central source of trust is a good idea or not (you will look at the state of HTTPS and make up your own mind on that). So, to repeat, proper security is hard.


> I lived in a country that did exactly that.

Which country?


Interesting.

If you don't mind me asking, what country?


Exactly. I've had this happen with Dell. Twice, same computer. They ask for the defective drives back, which is fair. I informed them that as they contained company confidential information they would be put beyond use and they were fine with that--though I'm not 100% sure the phone drone understood the implications.

Cue a pneumatic drill and a circular saw. Modern drives are surprisingly hard to drill, btw, what with being so dense.

After the second replacement failed too after a couple of weeks, I just went and bought a disc from the local shop down the road. It's probably still going, twelve years later. We never bought Dell again.


That's funny. I had a problem with my ADSL connection that lasted for over a month. I spent two hours and forty minutes on the phone, spread over, IIRC, 15-20 calls, with the cycle going 1. "What's the problem", 2. "Blah (upstream network problem on their side)", 3. "Have you reset your modem?", 4. "Yes, and I've also done blah, blah, blah, and blah", 5. OK, can we call you back? 6. "Yes". 7. Go back to 1.

After three weeks of this my patience was running thin and, before reporting the company to the telecommunications regulator, I thought I would give them one last chance and (very politely) emailed the CEO. When I checked my phone half an hour later I had three calls and two emails from the guy in charge of quality (this is, btw, one of the five biggest telecoms in the world). From that point, it took them three days to fix the problem (a network systems upgrade gone wrong, not trivial), and I finally got an email from this quality guy on a Saturday at 19:30. The current CEO did say he was going to make customer support a priority, and it seems he took that to heart. Can't fault him (the quality guy appears to have taken a massive chewing, by the way).

I have to say, I did stay polite and sympathetic throughout. It is not the agents' fault that they have a shit CMS, and it's just not cool to be rude anyway.

Besides, I've got that customer support T-shirt, albeit in a very specialised industry, and I've learned that the customer is most definitely not King. We had a "difficult" user once, who kept wasting our time and being rude to the other guys (not me for some reason), and it ended with my boss calling the customer's boss and telling him he would cancel their licence if this guy called again. The recalcitrant user got let go that same day.


> I haven't seen the third step in the "Cue -> Habit -> Reward" cycle mentioned yet.

Good point! I allow myself an Irn-Bru only on run days (and less than 10 miles doesn't count). I don't live in Scotland and Irn-Bru is particularly hard to find, which is part of the incentive. Gives me a sense of accomplishment anyway!


Ironically, long distance running (10+ miles) is easier to do before breakfast, as an empty stomach is useful--digestion stops anyway as soon as your body needs some extra red cells to carry more oxygen. A trained person has enough ATP reserves for about 90 minutes at threshold intensity, and the whole process of ingesting, digesting, and metabolising a regular meal takes many hours (around twelve in my case), so having breakfast before a run is purely a psychological boost--bar quickly metabolised food, such as a banana or sports gels, which you do ingest during the run anyway.

All this of course depends on the individual's physiology and psychology.

For me, I tend to run either before breakfast or in the wee hours of the morning (one/two o'clock) before going to bed.

Mind, I'm an ultra runner, where being hungry and sleep deprived for days is the name of the game, so I've overcome that particular psychological barrier.

