Hacker News | fictioncircle's comments

You don't even need to do that.

Just have people opt in for NSFW content on their accounts. Leave them opted out by default (or public).

There is no reason it shouldn't be a human controlled flag.


There is a Mature filter on imgur already that's human-determined. The problem is that people posting pictures will mark things mature unnecessarily, and there have been trolls who put up hardcore pornography without marking the image as mature. The problem is, as always, trusting humans to do the right thing without massive consequence for doing the wrong thing.


I agree that should be the default, but getting the data to do it automatically is hard if you're going to ask users to mark things NSFW of their own accord.


No VPN can reliably anonymize you against government agents so I think the con is a non-issue. VPNs are only really useful when the local network is hostile and/or you want some degree of privacy from the sites you visit.

Anyone with sigint capability is going to figure out who you are with a VPN. (i.e. Government agents)


This isn't about pretending to be James Bond, and "hostile" networks with "government agents" and all their "sigint" coming for your secrets.

In the real world, VPNs are mostly used to download copyrighted material. They have a pretty much perfect track record in that regard. Running a cloud VPS, on the other hand, is no more secure than your ISP: the provider has records, and will share them when ordered to do so.


That's just as far as everybody knows; it doesn't mean it's necessarily true.

There can be a distinct difference between what is believed to be true and what's actually true.

There might very well be entities that have the information but are sitting on it for later use.


This is a trade I'm willing to accept. I trust a VPS provider more with those records than I trust either Verizon or Comcast, both of which are motivated by advertising.


This isn't remotely true. Pretty much every journalist uses a commercial VPN.


Now, now. I assure you once the permitting issue is resolved I'll deliver your bridge.

I just need another $10,000 :p


Buy parts and run your own firewall/DNS setup to drop anything odd.

It's honestly the only way to be "sure" if you worry about a manufacturer doing that sort of thing. It won't be perfect, but the odds of someone targeting you with hardware spyware are prettttttttty low. And most manufacturers of enthusiast PC parts know it's suicide to do it mass-market like that.
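The "drop anything odd" part mostly comes down to comparing what your devices actually talk to against a known-good list. A toy sketch of that idea, with made-up hostnames and filenames (a real setup would feed this from your resolver's or firewall's query logs):

```shell
# Build a tiny allowlist and a fake DNS query log (illustrative names only).
cat > allowlist.txt <<'EOF'
debian.org
github.com
EOF
cat > dns.log <<'EOF'
github.com
updates.shady-vendor.example
debian.org
EOF

# Print every queried hostname that is NOT on the allowlist —
# the "odd" traffic you'd want your firewall/DNS box to drop.
grep -vxF -f allowlist.txt dns.log
```

In practice you'd point this at the resolver's real query log and turn the hits into firewall drop rules rather than just printing them.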


> Isn't that just saying "I don't believe in multiuser systems and/or their security models"...? If so, what specifically do you have against them?

[Core Services] + SSH is generally something you can harden effectively against attacks.

[2903429034902323094230 binaries] is something you generally struggle to maintain security patches/etc on.

The simple fact is, there is just too much attack surface on a vanilla Linux box, once an attacker has an account, for you to reliably do EVERYTHING you need to do to secure it 24/7/365.

At least imho, given my time constraints/budget.


> [2903429034902323094230 binaries] is something you generally struggle to maintain security patches/etc on.

Does a modern Linux come with that many binaries SETUID?!?

Sorry, I couldn't resist. :P But yeah, I see your point. It's a bit sad, though, that all these people's hard work (e.g. the OP's) is in vain...


> that all these people's hard work (e.g. OP) is all in vain...

Part of the reason people's hard work is in vain is that any time the topic of doing things better comes up, a cluster of developers will insist there's no point in improving e.g. the filesystem because "once they're on the box, you're screwed." So it becomes a self-fulfilling prophecy.


Yea, I get a similar feeling...

In general there seem to be quite diverse opinions out there about "security", and a lot of the space seems occupied by "extreme pragmatists" (or even "anti-intellectuals"). E.g. lots of people feel it's warranted to peddle (simple) falsehoods instead of trying to understand (complex) problems. I can understand it may be the right approach from a day-to-day IT management perspective, but I'm not so sure it's the most viable path towards better security long-term.


> I can understand it may be the right approach from a day-to-day IT management perspective, but I'm not so sure it's the most viable path towards better security long-term.

Yeah, this is why I had the caveat:

> At least imho, given my time constraints/budget.

The "best" long-term path is to have larger security budgets that allow for the objective that you, and the other folks who dislike my response, want. The problem, frankly, is we just aren't there yet.

For instance, our budget for maintaining security is ~5% of the IT budget. A large portion of that goes to perimeter defense appliances (firewalls, barracuda antispam/antivirus filters, etc.) as well as making sure ublock, anti-malware, etc are installed on every machine. The other major chunk ends up in securing WAN-facing services that can be exploited remotely. The last major chunk is user training to get them to stop doing things like pay bills for services we never purchased, clicking on strange links, running strange attachments, etc.

After that, we have no resources to do more than run apt-get update && apt-get upgrade -y to protect the attack surface once an account is breached. We've got a few things we had to re-compile manually; they break with that process, so we moved them out of the OS's package manager. Our internally built applications also likely have vulnerabilities exploitable from a local account. Those items never have the budget to be maintained, and we certainly wouldn't survive someone taking over a local shell account.
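One way to keep a blanket apt-get upgrade from clobbering hand-compiled packages, instead of pulling them out of the package manager entirely, is to put them on hold first. A sketch, assuming Debian/Ubuntu with root access; the package names are hypothetical:

```shell
# Pin the packages we rebuilt by hand so upgrades skip them.
# "mycorp-openssl" and "mycorp-nginx" are made-up names for illustration.
apt-mark hold mycorp-openssl mycorp-nginx

# The routine blanket upgrade now leaves the held packages alone.
apt-get update && apt-get upgrade -y

# Double-check what is currently pinned.
apt-mark showhold
```

The trade-off is that held packages stop receiving security updates through apt at all, so they still need a manual rebuild process on their own schedule.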

I suspect, given that this is (roughly) the situation at every place I've worked, it's simply too common to be treated as an issue.


> "once they're on the box, you're screwed."

The other one I see touted everywhere far too often is "false sense of security." Because improving the security of one thing can't possibly help when there could be a dozen other things that may yet be vulnerable. No, instead of helping, fixing that one thing lulls you into believing you are safe and that alone makes it all less secure. :-)


If you can audit the code yourself, you can treat it as code you authored. (assuming you are competent to perform the audit)



Valid point.

https://github.com/HainaLi/horcrux_password_manager

It is in JS at least. Underhanded C is likely an easier trick to manage.


True in theory, though in practice I know plenty of capable people, and almost none of them bother to read the OpenSSH source (or even a subset, like recent changes) before updating or recompiling.


Make sure you read the code of the compiler you're using as well, and bootstrap/compile it from that source instead of trusting an existing compiler binary.


Why stop with compilers? Inspect the circuit diagrams for all your hardware and then make sure the actual manufacturing followed the designs to a tee.


I mean, I know it's hyperbole, but I'm pretty sure there are hardware bugs that allow access; see that Intel or IBM remote-management disclosure. It might not be a real backdoor, but it's as good as one. As people above are mentioning, keep your paranoia inside your threat model.


I wish I could find the story where someone actually had this issue.

Basically, the story was that a program for grad research was inserting all kinds of nasty, anti-semitic strings into text, and it turned out a previous grad student had poisoned the compiler to modify the strings, and it was able to re-poison itself every time through something else.

I forgot the exact details but it is an amazing read.


If you're able to find that, I'd love to read it.



Was a great read -- thanks!


> True, in theory, though in practice, i know plenty of capable people but almost none of them bothers to read the openssh source (or even a subset, like recent changes) before updating or recompiling.

Then they aren't paranoid but normal folks, eh?


What do you mean by "treat it as code you authored"?


For purposes of security paranoia, if you can perform a security audit on open source code it is just as good as any other code you've written.

Idk about other people but I find anything I don't find security holes in myself "as good" as anything I've written. I've got the same set of assumptions/blinders/competence either way.


And this is why we need online anonymity, to be perfectly honest.

It's too dangerous to be honest under your real name, and has been for years.

It's a lot like Roko's basilisk that way. Once you know the capability exists, you have to destroy it or help it. There isn't really any middle ground.


> It's a lot like Roko's basilisk that way. Once you know the capability exists, you have to destroy it or help it.

I sort of wish Roko's hadn't played out as such a joke, because the general sentiment is actually a really under-appreciated one.

There are all kinds of settings where the best outcomes are gained by either preventing a thing or enabling it - and succeeding. Revolutions seem like the obvious case, where the highest payoffs accrue to the vanguard revolutionaries (if they win) or the establishment (if they win). Various doomsday cults in fiction also count, where people produce a bad outcome on the logic that if someone else does it first, that would be even worse.

It's actually really nice to have the idea of something which is sensible to restrain, right up until it gets out of control and turns on the people who restrained it.


> Roko's basilisk

Curiosity killed the cat. Now I have to decide whether to destroy it or help it...


To be fair, doing so would require a bunch of cache invalidations and they've always barely had enough money to limp on to the next investor. It would also likely be abused.

It's quite possible many features would simply raise their costs by .X% and were therefore impossible for that reason.


Yeah, but tbh it's still not e2e encrypted. It just means WhatsApp is ignorant.

So they are in the clear legally, but morally it's still dubious to do that, given it's effectively disclosing what is often a substantial portion of the conversation.


The message contents are e2e encrypted.

And transmitting a URL usually has no use beyond accessing it. They are doing what the user expects; it's just lacking some communication and power-user tools to override the default behavior.


> And transmitting a URL usually has no use beyond accessing it. They are doing what the user expects; it's just lacking some communication and power-user tools to override the default behavior.

Let's just say there are certain things where merely accessing them can cause legal complications, and not reporting them is technically still a crime.


I'm not sure that's what a user would expect; if anything, I'd think users would expect the opposite: that internet requests, potentially carrying identifying information, are not sent to third parties based on what they've typed into the box before hitting send.


The system is clearly passing message data, not metadata, back to the app servers in the clear. Full stop.


I use the San Jose facility for my Linodes. I haven't had a problem since the rolling power outages years ago where Hurricane Electric's backup power failed to kick in.

