We have a moderately complex set of services we deploy with some separation of application code and infrastructure. No application code that runs on VMs is deployed as part of the infrastructure IaC - that’s all loaded once the “empty” infra is in place. The grey area is around non-VM compute like Lambda and Step Functions, which can be a part of the infra templates.
The way these services work requires an initial set of code to create the resources, and while it would be possible to send a “no-op” payload for the infrastructure deployment and then update it with real application code later, that seems pedantic (to us).
Maybe someday that changes, but for now it isn’t at all burdensome and we’ve been very successful with this approach.
Yeah, we've settled on the day-0 path of deploying an empty image or no-op code when first provisioning a service, then letting CI/CD pick up the actual deployments long term. I can see the "this seems pedantic" POV, but this is what we've found works across a number of cloud-native services, and it accomplishes the end goal of managing infra with IaC while deploying the application layer with whatever tool we want.
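The "no-op code" part can be as small as a placeholder handler. A hypothetical Python sketch (names and return shape are illustrative, not anyone's actual code):

```python
# Hypothetical day-0 placeholder for a Lambda created by the infra templates.
# It only needs to exist so the resource can be provisioned; the first real
# CI/CD run replaces it with actual application code.
def handler(event, context):
    # Deliberately does nothing useful.
    return {"statusCode": 200, "body": "placeholder - awaiting first app deploy"}
```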
We have a similar array of deployment targets, and the method is context dependent. Kubernetes' declarative manifests and reconciliation loop are winning out for applications, both for our devs and for the industry at large. Cloud functions / Lambda are an annoying corner case; we currently handle those with a late step in CI/CD, with a move planned to a dedicated Argo setup just for CD (Argo Workflows, not Argo CD, because Argo CD only does Helm well).
Run it over WireGuard? I have this setup — cloud-hosted private DNS protected by Noise/ChaCha20. Only my devices can use it, because only they are configured as peers.
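A minimal sketch of the client side, assuming a resolver at 10.0.0.1 reachable only through the tunnel (keys, addresses, and the endpoint hostname are all placeholders):

```ini
# Hypothetical WireGuard client config: DNS queries go to a resolver
# that is only routable inside the tunnel.
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1              ; the cloud-hosted private resolver

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.1/32    ; route only the resolver through the tunnel
PersistentKeepalive = 25
```

Since only peers with a configured key pair can complete the handshake, the resolver is effectively invisible to everyone else.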
Yes — talking and hearing/reading about it. I don’t fault folks for being excited when first getting into it, but it’s rare to hear anything new said. And what is new is increasingly niche and unlikely to have any application to what I do.
If you read the history you’ll see the appropriate word is “restarted” the EV revolution. It was on and off again in a slow march to the point that allowed Tesla to exist. I’m not diminishing the role Tesla played, but it has to be taken in context. They stood on shoulders.
I think looking at every carmaker’s lineup should make it obvious that they don’t give a crap what powers a car; they are just trying to sell what’s popular. EVs were trendy for a couple years and a margin-subsidizing $7000 was available, so everybody enthusiastically brought out EVs. Now they’re less popular, so they’re all pulling back. Arguably even Tesla is doing so, given that Musk has intimated that he didn’t really think Tesla was going to keep selling cars forever.
When the demand is sufficient, the cars will be sold in numbers to match it. Demand will increase as it becomes practical to own an EV for more people. This mainly has to do with charging infrastructure at every level, which is capital intensive for both individuals and governments.
The statement doesn’t claim any fact: it’s a hypothetical, not unlike a “based on real events” movie/book/etc that never quotes or attributes specific actions to a subject.
And that’s why Atlassian is very likely to lose over and over as they appeal (but never say never these days in the US).
Was the CEO dialing in from the headquarters of an NBA team they owned? Yes.
Were they calling to aggressively dismiss employee claims (without video I cannot prove "yelling", but that is a way that word is used in common parlance)? Yes.
Does downleveling employees have a significant negative impact on their careers? Yes.
This was one of the projects students did when I helped teach APCS to high schoolers as a TEALS volunteer (FracCalc).
Some of the implementations went way overboard and it was so much fun to watch and to play a part.
Even as a “seasoned” developer I learned some tidbits talking through the ways to do (and not do) certain parts. When to store input raw vs processed, etc.
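As one example of the raw-vs-processed question: parsing a mixed-number token into a normalized value once, instead of re-parsing the raw string on every operation. A hypothetical sketch in Python (the token format is the usual FracCalc-style `whole_num/den`; function names are mine):

```python
from fractions import Fraction

# Parse a token like "2_1/4", "1/2", or "3" into a normalized Fraction.
# Storing the processed Fraction means arithmetic never touches the raw
# string again; storing the raw string would mean re-parsing per operation.
def parse_mixed(token: str) -> Fraction:
    whole, _, frac = token.partition("_")
    if frac:  # mixed number, e.g. "2_1/4"
        num, den = frac.split("/")
        f = Fraction(int(num), int(den))
        # the fractional part carries the sign of the whole part
        return Fraction(int(whole)) + (f if int(whole) >= 0 else -f)
    if "/" in token:  # plain fraction, e.g. "1/2"
        num, den = token.split("/")
        return Fraction(int(num), int(den))
    return Fraction(int(token))  # whole number, e.g. "3"
```

The subtle bit the students ran into is exactly the sign handling on mixed numbers — a good argument for normalizing once at the input boundary.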
That is just the archive part. If you finish reading the paragraph, you'll see that updates since 2026-03-16 23:55 UTC "are fetched every 5 minutes and committed directly as individual Parquet files through an automated live pipeline, so the dataset stays current with the site itself."
So to get all the data you need to grab the archive and all the 5 minute update files.
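In other words, a full read is snapshot plus incrementals. A hypothetical sketch, assuming the archive and the 5-minute updates live in separate directories with lexically sortable names (the layout here is my guess, not the dataset's documented one):

```python
from pathlib import Path

# Build the ordered list of Parquet files for a complete read:
# the bulk archive first, then the timestamped incremental updates.
def full_dataset_paths(root: Path) -> list[Path]:
    archive = sorted(root.glob("archive/*.parquet"))  # the snapshot
    updates = sorted(root.glob("updates/*.parquet"))  # 5-minute increments
    # Read these in order, e.g. with pandas.read_parquet / pyarrow.
    return archive + updates
```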
That paragraph doesn’t make it clear (to me) that it’s a snapshot with incremental updates. If that’s what it is. Sorry if my obtuse read offended. I just figured it was edge cached HTML, and less likely it was actually broken.