
Do y'all have any way of tracking feature requests for custom images and machine types on GCP?

Cloud ML feels not nearly ready, but being able to do something like this on GKE would be magical.


I don't think SRV records are the right answer; your networking layer should be k8s-aware and issue a watch on Endpoints; it'll be updated immediately-ish when the set of servers changes. This is similar to how finagle's zookeeper server set is supposed to work.

What linkerd buys you is that you don't have to write this kind of k8s-aware, zipkin-logging library for every language you're running in production. But I think it's straddling a very narrow segment: small users shouldn't care about this and should just rely on round robin (zipkin is a PITA to run anyway), while large users will probably want to write their own libraries (zipkin is a lot better if you put traces in your process).


> I don't think SRV records are the right answer;

OK, A/AAAA are more common.

> your networking layer should be k8s aware and issue a watch command on Endpoints;

Or, I could use standard networking concepts and not build my entire app to suit one new thing that I may not want to stay with forever, or may only be testing.

> What linkerd buys you is you don't have to write this type k8s-aware, zipkin logging library for every language you're running in production

There are already TCP/IP, DNS, DHCP, &c libraries for all languages, and that aren't tied to a specific stack. Why should I build into this stack and not into a more common subset?


DNS for service discovery is fraught with perils. Many implementations don't respect TTLs, don't actually use multiple records, or don't do anything smart like power-of-two choices.

Even if you pick all your impls carefully, you still have to wait out your TTL instead of getting a push mechanism for changes.
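For reference, the "power of two" above is power-of-two-choices load balancing: sample two backends at random and pick the less loaded one. A minimal sketch, with a hypothetical in-flight-request count per backend (the names and load numbers here are made up for illustration):

```python
import random

def pick_backend(loads, rng=random):
    """Power-of-two-choices: sample two backends at random and take
    the less loaded one. Gets near-optimal balance at O(1) per pick."""
    a, b = rng.sample(list(loads), 2)
    return a if loads[a] <= loads[b] else b

# Hypothetical in-flight request counts per backend address.
loads = {"10.0.0.1": 3, "10.0.0.2": 0, "10.0.0.3": 7}
choice = pick_backend(loads)
```

The appeal over plain round robin is that it reacts to actual load without needing global state; over DNS, it needs a live view of backends, which is exactly what TTL-bound resolution doesn't give you.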

If you wanted to implement the watch as a library, code wise, it's maybe ~500 lines per language, using the k8s client lib. I could hammer it out in two days.
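The core of such a library is just bookkeeping over watch events. A sketch of that logic, assuming events arrive as plain dicts so it stays self-contained; in a real version the stream would come from the official Kubernetes client (e.g. `kubernetes.watch.Watch().stream(...)` over Endpoints), and the event shapes here mirror but simplify what that client delivers:

```python
class EndpointTracker:
    """Maintains a live set of (ip, port) backends from Endpoints
    watch events. Event dicts are simplified stand-ins for what a
    k8s client library would deliver."""

    def __init__(self):
        self.addresses = set()

    def handle_event(self, event):
        """Apply one ADDED / MODIFIED / DELETED watch event."""
        if event["type"] == "DELETED":
            self.addresses.clear()
            return
        # ADDED / MODIFIED: rebuild the set from the subsets list.
        fresh = set()
        for subset in event["object"].get("subsets", []):
            for addr in subset.get("addresses", []):
                for port in subset.get("ports", []):
                    fresh.add((addr["ip"], port["port"]))
        self.addresses = fresh


tracker = EndpointTracker()
tracker.handle_event({
    "type": "ADDED",
    "object": {"subsets": [{
        "addresses": [{"ip": "10.0.0.1"}, {"ip": "10.0.0.2"}],
        "ports": [{"port": 8080}],
    }]},
})
```

The per-language cost in the comment above comes from wiring this to each language's k8s client and reconnect/resync handling, not from the bookkeeping itself.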


> operator-overloading enabling eDSL-creation

This is almost certainly in the negative column for me. SBT has the dubious distinction of being the only build system where it takes me hours to figure out how to change a setting. Inevitably I have to fall back to reading the source, and given all the macro magic, that can be difficult to unwind.

I've been using Pants, mostly because it's polyglot and has better multi-project support. The OSS community is a lot smaller, and it has a fair number of bugs, but I find the code much more approachable.


(disclaimer: post author)

where are you using Pants?

I would really love to be able to use Pants as well, but alas, once I got running on SBT and was faced with some rough edges in Pants's public-facing adoption/documentation story, I had to abort :)


My current startup and my last one, both smaller orgs. I set up both build systems, and initially used SBT at the previous one. In both cases I think it helped that we had engineers from larger companies who were familiar with a working monorepo; if you've seen one done well, you have some aspiration for what to shoot for with Pants, even if it's more than what's currently available.

I'd use SBT again for a self-contained, single-project setup. Once you learn the arcana, it works well, has a ton of plugins, and the REPL is nice. I don't think it scales well with new engineers or number of projects, though.


All very reasonable, thanks.

I've used Blaze at Google and then Pants at Foursquare, so I've seen this sort of thing done well. But yeah, Pants seemed like it was going to take a higher level of commitment to set up and maintain, due to the smaller OSS ecosystem around it, so I had to short-circuit that thread.

Also, putting everything in a monorepo is not really an option in my current OSS-focused setup, and I've come to have grave doubts about its desirability overall, after years of believing that it was the ideal way, but that's another discussion :)


Is there a threat model where you control the network enough to fake domain validation, but only if the target initiates the request to Let's Encrypt?

Otherwise it doesn't matter whether you use Let's Encrypt: the attacker could just initiate validation regardless of your CA and end up with a valid certificate (which would still fail cert pinning).

Edit: Oh I see, it's more about whether DV should ever be green.


> FWIW, my personal website uses let's encrypt, so it would be yellow or worse.

This shouldn't affect your security stance.

There's a common misconception that you entrust your private keys to your CA and that they can somehow transparently MITM you. But they only have your public key, not your private key, so they can't do that.

The security threat from trusted CAs is that they can MITM anyone, regardless of whether you use them or not. BUT the attack isn't transparent, and things like cert pinning are effective in the real world at preventing attacks.


The attack is definitely transparent if you trust the CA that issued the MITM cert.


If you use cert pinning, as in the DigiNotar/Iran/Gmail case, you're still protected against a trusted CA, assuming you've communicated in the past, which is realistic for a real-world attack.

It's an attack that's difficult to deploy because it's easy to detect if you're looking in the right places, and as soon as it's detected, you know the CA has been compromised, and the attacker loses a large investment.


It's not as difficult to deploy if you can target only specific users, but I agree with you. The problem with cert pinning is that it's hard to do: if you make a mistake, nobody can access your site for quite a long time...
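The check being pinned here can be sketched in a few lines. This assumes HPKP-style pins (base64 SHA-256 over the DER-encoded SubjectPublicKeyInfo); the key bytes below are placeholders, not a real key, and a real client would extract the SPKI from the presented certificate:

```python
import base64
import hashlib
import hmac

def spki_pin(spki_der: bytes) -> str:
    """Base64(SHA-256) of the DER-encoded SubjectPublicKeyInfo,
    the form HPKP-style pin lists use."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_matches(spki_der: bytes, pinned: set) -> bool:
    """Constant-time check of the presented key against pinned values.
    Pinning several keys (current + backup) is what limits the lockout
    risk: a rotation mistake only bricks the site if no pinned key
    remains in use."""
    pin = spki_pin(spki_der)
    return any(hmac.compare_digest(pin, p) for p in pinned)

# Placeholder bytes standing in for a real DER-encoded public key.
fake_spki = b"\x30\x82\x01\x22example-public-key"
pins = {spki_pin(fake_spki), "hypothetical-backup-pin="}
```

Because the pin is over the site's own key and not the issuer, a MITM cert from any CA, trusted or not, fails the check, which is why pinning holds up even against a compromised CA.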


We were a Parse user for (many) apps, and tried to run Parse Server briefly before just letting all the features built on it die.

I think one of the major instabilities not mentioned is the request fanout. On startup, the Parse iOS client could fan out to upwards of 10 requests, generally all individual pieces of installation data. Your global rate limit was 30 by default, so even barely used apps would get constantly throttled.

It was even worse running Parse Server, since on MongoDB those 10 writes per user end up sitting in a collection-level write queue, blocking reads until everything is processed. It was easier to kill the entire feature set built on it.

I know there were a ton of other issues but I've forced myself to forget them.
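The throttling math above is easy to see with a toy rate limiter. This sketch assumes the "30 by default" global limit behaves like a 30-requests-per-second token bucket (the exact semantics of Parse's limit are an assumption here), and shows four clients starting up at once, each fanning out 10 requests:

```python
class TokenBucket:
    """Minimal token bucket: `rate` tokens per second, burst up to
    `capacity`. The limit semantics are assumed for illustration."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then try to spend one token."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# Four clients start at the same instant, each firing 10 startup
# requests against a shared 30 req/s limit.
bucket = TokenBucket(rate=30, capacity=30)
results = [bucket.allow(now=0.0) for _ in range(4 * 10)]
```

With a 10-request fanout per startup, a handful of simultaneous launches exhausts the whole budget, which is why even barely used apps hit the limit.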

