Both domains were registered on 2017-12-22. Given the planned disclosure on 9th January that Google mentions, with MS and others silently coding patches [1], and the early reports [2] of kernel patches, does this mean that coding in the open has vastly accelerated the whole disclosure procedure?
I wonder how the timing relates to New Year and many companies having holidays in CW1.
Accelerated, but not vastly. Google's post says "We reported this issue to Intel, AMD and ARM on 2017-06-01", so the embargo still ended up holding for 7 months, even with it ending a week early. The domain registration dates of 2017-12-22 seem to be just when Google started to prepare for releasing the publicity materials, not when the vulnerability was discovered.
The Google Security Blog post actually says that the open development did not cause the embargo to break down early in the last 1-2 hours; rather:
> We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation. The full Project Zero report is forthcoming.
The problem isn't that "it's not brought forward by that much, relatively"; it's that you have an agreed timeline for coordinated patches (e.g. so one org doesn't push a fix before other orgs have). So if you have a bunch of orgs set up to do a release on day X, and you then publish on X-[whatever], you are effectively zero-daying them.
Is it super important in this case? shrug.
But imagine for the sake of argument there was some undocumented cpu behaviour "if instruction x,y,z are executed in that order with these constants then catch fire", then having anyone pre-empt the agreed update time could be bad.
Some researchers had independently created and demonstrated a working PoC, based on the Linux patches they saw, which read kernel memory from user space. At that point it was already public.
After that it's all about PR and getting people prepared for the magnitude and impact early.
Also to let people know that patches that were already available can be used (restarting GCP/AWS instances, SPI on chrome).
That's funny, but also makes me wonder how you get contracted to do logos for things like this. Based strictly on her LinkedIn, she doesn't work for Google. Maybe a friend of someone? Kind of a cool gig though.
> Want to know what's really going on with the Intel security flaw everyone is talking about? Checkout https://meltdownattack.com to get all the details. This is my boyfriend's and his research team's latest work. An huge security breach which affects nearly all your computers! Stealing all your secrets never was that easy!
Doh, I feel stupid now. I only looked at bandwidth costs, not the request prices. That's what I get for editing my post late at night based on reading, instead of based on personal experience.
For low bandwidth, you're absolutely right, the costs are at best the same. For high bandwidth however (once you get above 10TB), CloudFront works out cheaper (by about $0.010/GB, depending on region). But that wasn't taking into account the request cost, which as you point out, is more expensive on CloudFront, which can negate the savings from above depending on your usage pattern.
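To make the tradeoff concrete, here's a minimal sketch of the arithmetic. The rates below are illustrative placeholders, not current AWS prices (which vary by region and tier); the point is only that a small per-GB saving can be offset by a higher per-request charge.

```python
def monthly_cost(gb, requests, price_per_gb, price_per_10k_requests):
    """Total monthly cost: bandwidth charges plus request charges."""
    return gb * price_per_gb + (requests / 10_000) * price_per_10k_requests

# Illustrative (made-up) rates, NOT actual AWS pricing:
# S3 slightly more per GB but cheaper per request; CloudFront the reverse.
s3 = monthly_cost(15_000, 50_000_000, price_per_gb=0.090, price_per_10k_requests=0.004)
cf = monthly_cost(15_000, 50_000_000, price_per_gb=0.085, price_per_10k_requests=0.010)
print(f"S3-only: ${s3:,.2f}  CloudFront: ${cf:,.2f}")
```

With these placeholder numbers the per-GB saving still wins, but as the request count grows relative to bandwidth, the request charges can flip the result, which matches the usage-pattern caveat above.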
I'll update my post accordingly, thanks for pointing this error out!
You do have to pay for S3 to CloudFront traffic, so really you're paying twice. (Although the S3 to CF traffic might be cheaper than S3 to Internet, according to the Origin Server section on the CloudFront pricing page.) http://aws.amazon.com/cloudfront/pricing/
Yup, this was the intention. You could still allow your automation processes SSH access, just disable it for your users.
The idea is that if a user can't SSH in (at least not without modifying the firewall rules to allow it again), it will force them to try and automate what they were going to do instead. It worked well for me, but it's probably not for everyone.
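One hedged way to sketch this with `ufw` (the automation host's address, 10.0.0.5, is a placeholder you'd replace with your CI/config-management host):

```shell
# Allow SSH only from the automation host (placeholder IP).
sudo ufw allow from 10.0.0.5 to any port 22 proto tcp

# Deny SSH from everyone else; ufw evaluates rules in insertion order,
# so the allow rule above takes precedence.
sudo ufw deny 22/tcp

sudo ufw enable
```

Re-enabling access for a debugging session is then just deleting the deny rule, which keeps the "friction" of manual SSH deliberate rather than absolute.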
I'll admit I hadn't really looked at this in depth; using S3 without a CDN solved a particular use case I had a while ago, and it just seemed unnecessary to add a CDN in front of it. I've been doing some reading today, and it seems I was wrong. Adding a CDN in front brings lots of benefits I didn't know about!
I'll update the article soon to add in the new information.
Except that some automation frameworks rely on inbound SSH access to the machines. Ansible would be an example of such a framework, in its default configuration at least.
The goal of the tip is really to stop users SSHing in just to fix that one little thing, so you could still allow your automation frameworks SSH access and just disable it for users.
It can also be useful to SSH into a system to check what's going on with a specific problem. Sometimes weird things happen that you can't always anticipate or automate away.
I think the problem is that I've made it seem like a strict rule in the article; "You must disable SSH or everything will go wrong!!!". It's really just about quickly highlighting what needs automating. Like you say, sometimes you just want to diagnose your problems manually, and that's fine, re-enable SSH and dive in. But if you're constantly fixing issues by SSHing in and running commands, that's probably something you can try to automate.
Personally I always had a bad habit of cheating with my automation. I would automate as much as I could, and then just SSH in to fix one little thing now and then. I disabled SSH to force myself to stop cheating, and it worked well for me, so I wanted to share the idea.
Of course, there's always going to be cases where it's simply not feasible to disable it completely. It depends on your application. The ones I've worked on aren't massively complex, so the automation is simpler. I can certainly see how not having SSH access for larger complex systems could become more of a hindrance.
Specifically, Google's SRE books are particularly useful (https://landing.google.com/sre/books/) along with the book "Incident Management for Operations" (http://shop.oreilly.com/product/0636920036159.do) and Etsy's Debriefing Facilitation Guide (http://extfiles.etsy.com/DebriefingFacilitationGuide.pdf).
The book "Comparative Emergency Management" (https://training.fema.gov/hiedu/aemrc/booksdownload/compemmg...) is also quite interesting, as it compares the emergency management practices of about 30 different countries.