revertts's comments | Hacker News

This is done by some apps today, though the motivation was typically network perf (http2 and then QUIC support). The stacks are large - on iOS it's difficult to take over a portion of networking without rebuilding a substantial amount of it, so you'll have TLS, a full http stack, plus all supporting logic for connection pooling, etc. The closest thing to an open-source, drop-in option like this is cronet, the networking core of chromium packaged as a standalone library. Last I looked it was multiple megabytes in size, which is still a substantial cost for iOS apps. They can also be quirky to use because they fight against the system's defaults in some areas and can cause other inefficiencies (typically outweighed by the network improvements).

I believe Uber talked publicly about adopting cronet, and Facebook gave a talk about mobile proxygen (though it is not open-source). If you pop open the Netflix and YouTube apps, you will likely see the same.


I read their point as “information hiding is not a concept exclusive to OO.”

Whether the original quote meant to imply that, I'm not sure, but I do see how it could be read that way.


It looks like there's a single 5-star review by "Benedict B." Did you review your own app?


I don’t have the answer to this, but giving your own app 5 stars seems totally reasonable and fair. It’s like voting, everyone has one vote, why on earth would you not vote yourself if you’re on the menu?


I think the small upside (a single 5-star rating contributes very little to app ranking) is outweighed by the potential downsides (it can be a turnoff if a user notices, because it smacks of astroturfing, which seems to be a hot topic these days; and it violates App Store guidelines, though there's a close-to-zero chance of that being enforced).

To me it seems like unnecessary risk for little benefit.

Edit: And product reviews are not the same as voting. :)


There are cases where you need them, but they're relatively rare - they definitely shouldn't be your first choice.

I'm fuzzy on all the current Linux use cases, but I do remember that one of the main users of rb trees was CFS, the Completely Fair Scheduler. It's a neat scheduling algorithm.

Java 8 specifically: HashMap is implemented with linked-list chaining. This is already not very performant, but you can only do so much with Java being a reference-heavy language. RB trees are used only if a bucket's chain grows excessively long (the treeify threshold is 8 entries) - so it's addressing an edge case, speeding up some worst-case scenarios.

Dense hash tables based on open addressing outperform bucketed chaining. Look also at abseil's swiss table or folly's f14 if you want to see how they've further advanced to take advantage of the hardware.

In general: flat, dense, linear structures are king for performance.
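For a concrete sense of what "flat, dense, linear" means, here's a toy linear-probing map for int keys. This is only a sketch of the idea, not how abseil's Swiss tables or folly's F14 actually work - the `FlatIntMap` name, the trivial `key & mask` hash, and the fixed capacity are all inventions for illustration:

```java
import java.util.Arrays;

// Toy open-addressing map with int keys/values in two flat arrays: no
// per-entry node objects, and probes walk adjacent slots, so lookups
// stay cache-friendly. No resizing or deletion in this sketch.
public class FlatIntMap {
    private static final int EMPTY = Integer.MIN_VALUE; // sentinel; MIN_VALUE can't be used as a key
    private final int[] keys;
    private final int[] vals;

    FlatIntMap(int capacity) { // capacity must be a power of two and exceed the entry count
        keys = new int[capacity];
        vals = new int[capacity];
        Arrays.fill(keys, EMPTY);
    }

    void put(int key, int value) {
        int mask = keys.length - 1;
        int i = key & mask; // trivial "hash" for the sketch
        while (keys[i] != EMPTY && keys[i] != key) {
            i = (i + 1) & mask; // linear probe: just the next slot over
        }
        keys[i] = key;
        vals[i] = value;
    }

    Integer get(int key) {
        int mask = keys.length - 1;
        int i = key & mask;
        while (keys[i] != EMPTY) {
            if (keys[i] == key) return vals[i];
            i = (i + 1) & mask;
        }
        return null; // absent
    }

    public static void main(String[] args) {
        FlatIntMap m = new FlatIntMap(16);
        m.put(1, 10);
        m.put(17, 20); // 17 & 15 == 1: collides with key 1 and probes to slot 2
        System.out.println(m.get(1));  // 10
        System.out.println(m.get(17)); // 20
        System.out.println(m.get(3));  // null
    }
}
```

Contrast with chaining: resolving the collision here touches the adjacent array slot rather than chasing a node pointer to somewhere else on the heap.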


>you can only do so much with Java being a reference heavy language.

This is definitely not true; implementing an array-based hashtable is trivial, and in most cases it outperforms java.util.HashMap. It's just that the decision to have LinkedHashMap (in Java 1.2) extend HashMap crippled the latter. The issue has been discussed quite a few times on the Java core mailing list.


I agree, HashMap is unnecessarily hamstrung because the API requirements push it into a bucket-chained implementation (C++ made the same mistake; it's one of the downsides of std::unordered_map). And you can definitely write a faster implementation with an array-based hashtable.

But I still stand by "you can only do so much being a reference heavy language." Unless you stick to purely primitive types, implement the hash table off heap, or wait for Project Valhalla to bear fruit, it's hard to get the data layout you'd want for a really good implementation. So I agree it can be better, but it's going to be hard to get to best - hence my comment.


I do agree data layout is hard - it pretty much comes down to direct byte buffers (off heap) and a poor man's memory manager (plus serialization/deserialization code).

Project Valhalla and structs is something people have been asking for since Java 1.4 or so. Still nowhere near. And indeed, "best" would take a custom implementation, case by case. It's doable, but usually very far from pretty.
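To illustrate the direct-byte-buffer approach mentioned above: fixed-size records packed into an off-heap buffer, addressed by slot index. This is a minimal sketch of the layout idea only - the record shape is made up, and there's no hashing or memory management here:

```java
import java.nio.ByteBuffer;

// Fixed-size (int key, long value) records packed into a direct
// ByteBuffer, which lives outside the Java heap. The GC sees one buffer
// object instead of one node object per entry.
public class OffHeapSlots {
    private static final int RECORD = Integer.BYTES + Long.BYTES; // 12 bytes per record
    private final ByteBuffer buf;

    OffHeapSlots(int slots) {
        buf = ByteBuffer.allocateDirect(slots * RECORD); // off-heap allocation
    }

    void put(int slot, int key, long value) {
        // Absolute-index puts: no object allocation, just writes at an offset.
        buf.putInt(slot * RECORD, key);
        buf.putLong(slot * RECORD + Integer.BYTES, value);
    }

    long valueAt(int slot) {
        return buf.getLong(slot * RECORD + Integer.BYTES);
    }

    public static void main(String[] args) {
        OffHeapSlots s = new OffHeapSlots(4);
        s.put(2, 42, 123456789L);
        System.out.println(s.valueAt(2)); // 123456789
    }
}
```

The ugly parts alluded to above show up as soon as values are variable-length or need freeing - that's where the "poor man's memory manager" comes in.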


I was not commenting on whether one should use it or not, just that it is still used and picked as recently as Java 8. We are in agreement for the rest of your post. :+1:


Ah! I'm sorry, I had misinterpreted your original comment.


What's the best way to keep an eye out for that TR? Periodically checking http://www.open-std.org/jtc1/sc22/wg14/ ?

I can't ever tell if I'm looking in the right place. :)


If you're interested in the final TR, I would imagine we'd list it on that page you linked. If you're interested in following the drafts before it becomes published, you'd find them on http://www.open-std.org/jtc1/sc22/wg14/www/wg14_document_log... (A draft has yet to be posted, though, so you won't find one there yet.)


> Put everything in a bowl

Care to share your measurements? I’m aiming for something similar, but having trouble getting my ratios dialed in.


How strange - OpenDNS/Cisco Umbrella seems to flag the domain and gives me a 403 Forbidden.


RE: NSDI - Is the Firecracker paper available somewhere, or not yet?

Edit: I'm an idiot - https://www.amazon.science/publications/firecracker-lightwei...


The problem isn't with terminating SSL, it's with keeping your keys safe on exposed infrastructure.

Using a single domain name and DNS to route is uncommon because it doesn't give you fine-grained control of load - you need to be mindful of the rack's capacity, and you also need to make sure that most of that ISP's customers go to the rack while people who aren't that ISP's customers don't.

Anycasting isn't going to be great for traffic management or long-lived TCP connections, and if you can avoid the complexity of each rack needing a BGP session into the ISP's network, you're going to be much better off.

Typically traffic is routed directly to the rack via a unique DNS name returned by some form of service call.


It's a very good question! Figuring out security for these remote racks is one of the hardest parts. For some CDNs their larger POPs will be locked cages with security cameras and all, but these small deployments are significantly more exposed.

Exactly how SSL is handled (or not) varies by provider, but one thing I'll mention is that these will typically never have a cert so important that it can't be easily revoked, and the most important data will likely not be flowing across them - exposure is very limited.

Using video streaming as an example, one option is to only do TCP termination for example.com (or to not even terminate that domain on the local cache, but back at your main datacenter), then use subdomains with individual certs for the local cache (eg. isp1.cache.example.com). In that case, service calls like login, retrieving the manifest, etc. are secured by the certs you're keeping in your primary dc, then the manifest has a set of https://isp1.cache.example.com URLs pointing to the local cache only for video segments.
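The URL split described above can be sketched concretely. The hostnames are the example ones from the comment; the /login and /seg/N.ts paths are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the domain-splitting scheme: sensitive service calls stay on
// the primary domain (cert kept in the main datacenter), while bulk video
// segments come from the ISP-local cache, which only ever holds a
// low-value, easily replaced cert for its own subdomain.
public class ManifestSketch {
    static String loginUrl() {
        return "https://example.com/login"; // terminated back at the primary dc
    }

    static List<String> segmentUrls(String cacheHost, int count) {
        List<String> urls = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            urls.add("https://" + cacheHost + "/seg/" + i + ".ts"); // local-cache cert only
        }
        return urls;
    }

    public static void main(String[] args) {
        System.out.println(loginUrl());
        for (String u : segmentUrls("isp1.cache.example.com", 2)) {
            System.out.println(u);
        }
    }
}
```

If the isp1 cache's cert is compromised, only the segment subdomain needs to be burned; the login flow and manifest delivery are untouched.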

Another tricky aspect is making sure that your main network treats them as untrusted so someone with local access can't use it to get a foothold into the rest of your infra.


>these will typically never have a cert so important that it can't be easily revoked

I think this is specifically addressed by the introduction of TLS Delegated Credentials[1], which allows the CDN edge to use a very short-lived credential in place of the certificate's private key.

It's already supported in evergreen browsers and in certificate profiles from commercial CAs like Digicert.

1. https://tools.ietf.org/html/draft-ietf-tls-subcerts-06


Yup! Some of the names on that draft are people who have previously worked on building these sorts of edge racks, so their experience with this infra helped shape the proposal. It'll be great once it's broadly supported, but that's going to take a while (or, depending on your client mix, an eternity).


Correct me if I'm wrong, but my understanding is that revoking a certificate is usually not a very effective mitigation for a stolen cert, since many clients don't check for revocation.


That's correct - "revocation" in this case would likely involve rolling the DNS name to something different. Since these racks tend to have precise targeting (i.e. not DNS GSLB) and non-user-facing names, there's more flexibility.

The delegated creds draft that regecks mentioned is also relevant. That will make issuing lighter weight, so this sort of 'burn the cert and roll the DNS name' procedure becomes significantly cheaper operationally.

