Out of curiosity, how would you describe TCP in these terms? Does the TCP stack's handling of sequence numbers constitute processing (on the client and server both I assume)? Which part(s) of a TCP connection could be described as delivery?
From TCP's perspective it has delivered the data when read() returns successfully. This is also the point at which TCP frees the packet buffers. From the app's perspective, TCP is an at-most-once system because data can be lost in corner cases caused by failures.
(Of course plenty of people on the Internet use different definitions to arrive at the opposite conclusion but they are all wrong and I am right.)
I agree this is what you'd consider delivery. Also agree everyone else is wrong and I am right ;)
The analogous question in TCP is what happens when the sender writes to its socket and then loses the connection. At that point the sender doesn't know whether the receiver has the data or not. Both are possible. For the sender to recover, it needs to re-connect and re-send the data. Thus the data is potentially delivered more than once, and the receiver can use various strategies to deal with that.
But sure, within the boundary of a single TCP connection data (by definition) is never delivered twice.
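To make those receiver-side strategies concrete, here's a minimal sketch of the reconnect-and-resend scenario: an at-least-once sender whose connection drops before the ack arrives, paired with a receiver that dedupes on an application-level message ID. All the names here are illustrative — this is not a real TCP API, just the application-layer pattern described above.

```python
# Sketch: the sender retries until it sees an ack (at-least-once),
# so the receiver may see duplicates and dedupes on a message id.
# Illustrative only; not anything the TCP stack itself exposes.

class Receiver:
    def __init__(self):
        self.seen_ids = set()   # ids already processed
        self.log = []           # side effects of processing

    def handle(self, msg_id, payload):
        # Idempotent receive: process each id at most once.
        if msg_id in self.seen_ids:
            return "duplicate"  # ack again, but don't reprocess
        self.seen_ids.add(msg_id)
        self.log.append(payload)
        return "ack"

class FlakySender:
    """Simulates a sender whose connection dies after the data is
    delivered but before the ack arrives, forcing a re-send."""
    def __init__(self, receiver):
        self.receiver = receiver

    def send(self, msg_id, payload):
        attempts = 0
        while True:
            attempts += 1
            status = self.receiver.handle(msg_id, payload)
            if attempts == 1:
                # Connection dropped: the sender never saw the ack,
                # so it can't tell whether the receiver got the data.
                continue
            return attempts, status

recv = Receiver()
attempts, status = FlakySender(recv).send(msg_id=1, payload="hello")
print(attempts, status, recv.log)  # → 2 duplicate ['hello']
```

Within a single connection, TCP's own sequence numbers do this dedup for you; the application-level id only matters across re-connects, which is exactly the corner case above.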
Unlike SO, it's common to have very situation-specific questions posted on YAQS. In fact, my team preferred random one-off questions to go through YAQS (our contact golink pointed you to a monitored YAQS queue) precisely because they're much more searchable (and scalable) than point-to-point chats.
So yes, searching for your GShoe error, and (assuming you found nothing) asking about it on YAQS is not a bad way to get help from some random faraway team.
I suppose it's partially because most team chats are locked down (invite-only). In a company with a reasonably open Slack, you might be able to ask in #gshoe-team or search it for relevant conversations, but not at Google in my experience - and this is setting aside the issue of message retention.
BTW, I agree 24h retention was truly ridiculous. Most of my colleagues hated it - fortunately (probably as a result of this legal case!) they disabled it and now the default is 30d everywhere.
Regarding promo, community contributions are still very much an expectation. Being active on YAQS counts toward that. True, the promo committee isn't going to go looking for it, so your manager needs to agree YAQS is a level-appropriate community contribution and include that in your promo packet.
Disclosure: I left Google like, a couple weeks ago
Exactly this. It's not uncommon to ask a coworker how to do something, get a response, and then send them a follow-up CL adding that information to a playbook or doc somewhere so others can reference it.
Heck, sometimes I responded to questions with a CL adding that info to docs.
The internal search engine helps drive a lot of this, too - if you want to know how to do something, docs (via search) are like the #1 choice. So, everyone's pretty incentivized to make it a good resource.
Best/worst. I worked at an e-learning company that, thanks to ancient e-learning standards, tried really hard to figure out when a user is leaving a web page so it could handle session exits gracefully.
sendBeacon was dead on arrival, for that purpose at least; iirc it's blocked by most ad blockers. (Or was, who knows with Manifest v3.)
Similarly we're unlikely to see sendBeacon truly replace tracking redirects, unfortunately. This isn't an area where tech firms are happy to see fine user controls.
I play Final Fantasy XIV, an MMORPG - supposedly, the peering connection between AT&T and FFXIV's US ISP (NTT) was particularly bad. [1]
This manifested as pretty severe connection issues for AT&T customers playing FFXIV. Worse, it was a chronic issue that would only flare up when that particular peering point was stressed.
One of the easiest workarounds? Hop on a VPN.
That's one example. Anecdotally, I have a few friends that toggle VPNs on and off when they encounter "network weather" in games. Personally, I'm a bit skeptical they're truly so often mitigating problems by toggling a VPN (instead of, say, just waiting a couple minutes), but hey, they swear by it.
Super cool. I always enjoy reading about systems that challenge, well, "ossified" assumptions. An OS not providing a shell, for example? Madness! ... or is it genius, if the OS has a specific purpose...? It's thought-provoking, if nothing else.
I'm a bit skeptical of parts. For instance, the "init" binary being less than 400 lines of golang - wow! And sure, main.go [1] is less than 400 lines and very readable. Then you squint at the list of imported packages, or look to the left at the directory list and realize main.go isn't nearly the entire init binary.
That `talosctl list` invocation [2] didn't escape my notice either. Sure, the base OS may have only a handful of binaries - how many of those traditional utilities have been stuffed into the API server? Not that I disagree with the approach! I think every company eventually replaces direct shell access with a daemon like this. It's just that "binary footprint" can get a bit funny if you have a really sophisticated API server sitting somewhere.
Exactly this. I was thinking of making a similar comment but you made it far better than I could.
Number of binaries is kind of a meaningless metric, especially for a system that historically follows the UNIX philosophy of each program doing one thing.
Sure, a shell is complicated and a potential risk, and perhaps it's a good idea to exclude from the base system in this context.
But I'd rather have ls, tr and wc on my system than some bespoke, all-encompassing API service that has been far less battle tested providing similar functionality.
And like you rightly pointed out, these new binaries all contain their own list of dependencies which are pulled in at build time and need to be taken into scope as well.
That's not to say Talos or its approach doesn't hold merit, but I think it's a little disingenuous to simply point at the number of binaries.
I agree number of binaries is an arbitrary metric but also an indicator that things work differently with Talos. You have to use the declarative API for management which some people could see as a bad thing.
I’d also like to point out that the system API is designed to be extendable and adaptable to different operating systems. We’d love for more vendors to create adapters/shims to get the benefits of API managed Linux
It has indeed been a strange time for Google SRE recently. However, they're definitely not planning on shutting down SRE - at least, if you can trust Google leadership's actual explanation of what that meant.
Supposedly, the ratio of SRE to product eng had been growing slowly over the years. The KR to "readjust" that ratio was to bring it back in line with historical norms, i.e., to ensure that SRE continued to scale sub-linearly with SWE/systems. This had (primarily) two facets.
First, it gave SRE teams an effectively-blank check to reevaluate their existing dev engagements and jettison the ones that weren't working well.
Second, it pushed to eliminate old tools/systems/platforms and converge onto the more modern stuff, like Annealing [1]. Fewer crufty platforms means fewer teams needed to run them, and improvements in those platforms have broad impact.
Anecdotally, my own sub-org (within SRE) is growing at the moment. Not by a huge amount, but growing nonetheless.