Hacker News | richadams's comments

There are some good additional resources referenced in the docs here: https://response.pagerduty.com/resources/reading/

Specifically, Google's SRE books are particularly useful (https://landing.google.com/sre/books/) along with the book "Incident Management for Operations" (http://shop.oreilly.com/product/0636920036159.do) and Etsy's Debriefing Facilitation Guide (http://extfiles.etsy.com/DebriefingFacilitationGuide.pdf).

The book "Comparative Emergency Management" (https://training.fema.gov/hiedu/aemrc/booksdownload/compemmg...) is also quite interesting, as it compares the emergency management practices of about 30 different countries.


https://spectreattack.com/

An information site with more details, and links to the papers on the two vulnerabilities, called "Meltdown" and "Spectre" (with logos, of course).

(https://meltdownattack.com/ goes to the same site)


Both domains were registered on 2017-12-22. Given the planned disclosure on 9th January that Google mentions, MS and others coding patches silently [1], and the early reports [2] of kernel patches, does this mean that, due to coding in the open, the whole disclosure procedure has been vastly accelerated?

I wonder how the timing relates to New Year and many companies having holidays in CW1.

[1] https://lists.freebsd.org/pipermail/freebsd-security/2018-Ja...

[2] https://news.ycombinator.com/item?id=16046636


Accelerated, but not vastly. Google's post says "We reported this issue to Intel, AMD and ARM on 2017-06-01", so the embargo still ended up holding for 7 months, even with it ending a week early. The domain registration dates of 2017-12-22 seem to be just when Google started to prepare for releasing the publicity materials, not when the vulnerability was discovered.


The Google Security Blog post doesn't actually attribute the embargo breaking down in the last 1-2 hours to the open development, but rather:

> We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation. The full Project Zero report is forthcoming.


The problem isn't "it's not brought forward by that much relatively" so much as that you have an agreed timeline for coordinated patches (e.g. so one org doesn't push a fix before other orgs have). So if you have a bunch of orgs set up to do a release on day X, and you then publish on X-[whatever], you are effectively zero-daying them.

Is it super important in this case? shrug.

But imagine for the sake of argument there was some undocumented cpu behaviour "if instruction x,y,z are executed in that order with these constants then catch fire", then having anyone pre-empt the agreed update time could be bad.


Sorry to be daft, but hasn't the Google Zero team jumped the gun on the coordinated disclosure date by publishing their blog post 6 days in advance?


Some researchers had independently created and demonstrated a working PoC, based on the Linux patches they saw, which read kernel memory from user space. At that point it was already public.

After that it's all about PR and getting people prepared for the magnitude and impact early.

Also to let people know that patches that were already available can be used (restarting GCP/AWS instances, SPI on Chrome).


I feel like the Meltdown logo was done by a real designer, and Spectre was designed by a bored developer.


From the site:

> Both the Meltdown and Spectre logo are free to use, rights waived via CC0. Logos are designed by Natascha Eibl.


It says at the bottom they were both done by the same person.


That's funny, but also makes me wonder how you get contracted to do logos for things like this. Based strictly on her LinkedIn, she doesn't work for Google. Maybe a friend of someone? Kind of a cool gig though.


https://www.linkedin.com/feed/update/urn:li:activity:6354450...

says:

> Want to know what's really going on with the Intel security flaw everyone is talking about? Checkout https://meltdownattack.com to get all the details. This is my boyfriend's and his research team's latest work. An huge security breach which affects nearly all your computers! Stealing all your secrets never was that easy!


I thought the presence of a branch in the logo was clever.


I just noticed that. That is pretty clever.


A fiber cut in Oregon could be responsible (https://puck.nether.net/pipermail/outages/2015-June/007906.h...). I've been seeing connectivity issues with us-west-2 for most of the day.

There was also a fiber cut in San Francisco area this morning (http://www.usatoday.com/story/tech/2015/06/30/california-int...).


They're currently beta testing a new (more modern) site: https://beta.united.com/ual/en/us/


I like Virgin's https://www.virginamerica.com/ way more than even their beta one. I guess they're going for different demographics.


"Bugs that are eligible for submission: ... The ability to brute-force reservations, MileagePlus numbers, PINs or passwords"

"Do not attempt: ... Brute-force attacks"

This seems contradictory. I assume the intent is to not allow DoS attacks (although they call that out separately further down the list)?


Not exactly.

Seems they're saying they'd accept a bug that can be caused by brute-force, but do not actually attempt a brute force yourself.

But yeah, I'd guess they don't want to intentionally invite a bunch of people to DoS the site.


Another interpretation is that, if you discover something similar to what Weev discovered, do not do what Weev did.


Doh, I feel stupid now. I only looked at bandwidth costs, not the request prices. That's what I get for editing my post late at night based on reading, instead of based on personal experience.

For low bandwidth, you're absolutely right, the costs are at best the same. For high bandwidth however (once you get above 10TB), CloudFront works out cheaper (by about $0.010/GB, depending on region). But that wasn't taking into account the request cost, which as you point out, is more expensive on CloudFront, which can negate the savings from above depending on your usage pattern.
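To make the trade-off concrete, here's a rough sketch of the arithmetic. The per-GB and per-request prices below are illustrative assumptions for the sake of the comparison, not current AWS rates:

```python
# Illustrative comparison of S3 vs CloudFront delivery cost.
# All prices here are made-up placeholders, not real AWS rates.

def monthly_cost(gb, requests, price_per_gb, price_per_10k_requests):
    """Total transfer + request cost for one month."""
    return gb * price_per_gb + (requests / 10_000) * price_per_10k_requests

# Assumed (hypothetical) prices:
s3_gb, s3_req = 0.090, 0.004    # $/GB, $/10k GET requests
cf_gb, cf_req = 0.080, 0.0075   # CloudFront: cheaper per GB, pricier per request

# High bandwidth, few large objects: the $/GB gap dominates, CloudFront wins.
big_files = (monthly_cost(50_000, 1_000_000, s3_gb, s3_req),
             monthly_cost(50_000, 1_000_000, cf_gb, cf_req))

# Low bandwidth, huge request count: request charges erase the per-GB savings.
tiny_files = (monthly_cost(100, 500_000_000, s3_gb, s3_req),
              monthly_cost(100, 500_000_000, cf_gb, cf_req))

print(big_files)   # CloudFront cheaper in this scenario
print(tiny_files)  # S3 cheaper in this scenario
```

So which option wins depends heavily on your average object size, exactly as the parent comment points out.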

I'll update my post accordingly, thanks for pointing this error out!


You do have to pay for S3 to CloudFront traffic, so really you're paying twice. (Although the S3 to CF traffic might be cheaper than S3 to Internet, according to the Origin Server section on the Cloudfront pricing page.) http://aws.amazon.com/cloudfront/pricing/

Also, S3 buckets cannot scale infinitely on their own; their key names have to be managed appropriately to achieve that. http://aws.typepad.com/aws/2012/03/amazon-s3-performance-tip...
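For anyone curious what "managing key names" looks like in practice: the linked AWS post suggests spreading keys across S3's internal index partitions, and one common approach is prepending a short hash of the key. A minimal sketch (key names here are made up for illustration):

```python
import hashlib

def partition_friendly_key(original_key: str, prefix_len: int = 4) -> str:
    """Prepend a short hash of the key so that lexicographically adjacent
    names (e.g. timestamped log files) spread across S3's internal index
    partitions instead of hammering a single one."""
    digest = hashlib.md5(original_key.encode()).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

# Sequential names like these would otherwise sort next to each other:
print(partition_friendly_key("logs/2015-06-30/0001.gz"))
print(partition_friendly_key("logs/2015-06-30/0002.gz"))
```

The prefix is deterministic, so you can reconstruct the full key from the original name when reading back.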

Finally :) I like SSH. But I'm the founder of Userify! http://userify.com


Yup, this was the intention. You could still allow your automation processes SSH access, just disable it for your users.

The idea is that if a user can't SSH in (at least not without modifying the firewall rules to allow it again), it will force them to try and automate what they were going to do instead. It worked well for me, but it's probably not for everyone.


I'll admit I hadn't really looked at this in depth; using S3 without a CDN solved a particular use case I had a while ago, and it just seemed unnecessary to add a CDN in front of it. I've been doing some reading today, and it seems I was wrong. Adding a CDN in front adds lots of benefits I didn't know about!

I'll update the article soon to add in the new information.


This just disables inbound SSH connections; the servers can still SSH out to other systems to pull in files and configurations, clone git repos, etc.

It's just a way to stop yourself from cheating and SSHing in just to fix that one thing, instead of automating it.


Except that some automation frameworks rely on inbound SSH access to the machines. Ansible would be an example of such a framework, in its default configuration at least.


Ah, I wasn't aware of that, very good point!

The goal of the tip is really to stop users SSHing in just to fix that one little thing, so you could still allow your automation frameworks SSH access and just disable it for users.


It can also be useful to SSH into a system to check what's going on with a specific problem. Sometimes weird things happen that you can't always anticipate or automate away.


I think the problem is that I've made it seem like a strict rule in the article; "You must disable SSH or everything will go wrong!!!". It's really just about quickly highlighting what needs automating. Like you say, sometimes you just want to diagnose your problems manually, and that's fine, re-enable SSH and dive in. But if you're constantly fixing issues by SSHing in and running commands, that's probably something you can try to automate.

Personally I always had a bad habit of cheating with my automation. I would automate as much as I could, and then just SSH in to fix one little thing now and then. I disabled SSH to force myself to stop cheating, and it worked well for me, so I wanted to share the idea.

Of course, there's always going to be cases where it's simply not feasible to disable it completely. It depends on your application. The ones I've worked on aren't massively complex, so the automation is simpler. I can certainly see how not having SSH access for larger complex systems could become more of a hindrance.

