Have you configured your tunables with powertop, set amd_pstate=active (and/or set up TLP)? If not, give that a try; it's a game changer.
Also, by "all day" I meant a working day (8+ hours), which is good enough for me to take my laptop off-site and work without a charger. Still falls a good bit short of the Apple Silicon MacBooks, of course; can't really compete with those until we get a decent Linux-native ARM notebook (unless you count Chromebooks).
Web performance is probably, for the most part, already valued about as much as it needs to be.
The numbers mentioned in the article are... quite egregious.
> Oh, Just 2.4 Megabytes. Out of a chonky 4 MB payload. Assuming they could rebuild the site to hit Alex Russell's target of 450 KB, that's conservatively $435,000,000 per year. Not too bad. And this is likely a profound underestimation of the real gain
This is not a "profound underestimation"; if anything, it's an overestimate by several orders of magnitude. Kroger is not going to save anywhere even remotely close to $435 million by reducing their JS bundle size.
Kroger had $3.6-$3.8 billion in allocated capex in 2024. There is no shot JavaScript bundle size accounts for ~9% of their *total* allocated capex.
I work with a number of companies of similar size, and their entire cloud spend isn't $435,000,000 -- and bandwidth (or even networking overall) isn't in their top 10 line items.
It's not just their direct cost; it's also the loss of revenue. The author wasn't arguing that they could save $435 million in server costs.

Instead they were arguing that, in addition to saving maybe a million or two in server costs, they would gain an additional $435 million in revenue because fewer people would abandon their website.
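To make the distinction concrete, a revenue-impact claim like that usually comes out of arithmetic along these lines. All inputs here are made-up illustrative numbers (not actual Kroger figures or the article's inputs):

```python
# Hypothetical back-of-envelope math: a "$435M/yr" figure comes from
# recovered *revenue*, not server savings. Both inputs are assumptions.
annual_online_revenue = 10_000_000_000  # assumed $10B/yr in online sales
recovered_fraction = 0.0435             # assumed share of sales recovered
                                        # by faster page loads

extra_revenue = annual_online_revenue * recovered_fraction
print(f"${extra_revenue:,.0f}")  # ~ $435,000,000
```

Note that even a tiny shift in the assumed recovered fraction swings the headline number by tens of millions, which is why estimates like this deserve skepticism either way.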
This article seems to focus on the shortcomings of LLMs being wrong, but fails to consider the value LLMs provide and just how large of an opportunity that value presents.
If you look at any company on earth, especially large ones, they all share the same line item as their biggest expense: labor. Any technology that can reduce that cost represents an overwhelmingly huge opportunity.
America was under a fascist ruler, but not under a fascist system of government.
Trump tested American democracy by consolidating power and was not successful, so we avoided falling under fascist rule.

The fear is that democracy might get tested again, and most of America doesn't seem to mind that. Maybe it's due to a lack of understanding, not caring, or genuinely wanting fascism; I don't know.
Neon seems really great to me, but I wish I could easily run it locally via Kubernetes. I know there are some projects out there[0] but they are all supported by 3rd parties and things like branching and other features don't appear to be supported.
I'd love to be able to just use a helm chart to have Neon in my homelab.
And if you're pairing your infra-as-code with a gitops model then you can help prevent these kinds of issues with PRs.
You can also use your git history to restore the infrastructure itself. You may lose some data, but it's also possible to configure destroyed resources to retain their data, or to back them up before destroying.
The problem with infra-as-code and gitops is that it's often nearly impossible to tell what will actually happen with a PR without running it somewhere, which is (1) expensive and (2) nearly impossible to get to mirror production.
Production and staging are about as far as you can get from a purely immutable environment. They carry state around all over the place; in some sense, that's their entire reason for existing.

This means that while GitOps can be helpful in some ways, it can also be incredibly dangerous in others. I'm not entirely sure it doesn't all come out in the wash in the end.
GitOps is just like "DevOps" -- you don't really know what it means to a specific org until you talk to them, because people interpret it differently based on their own understanding (or if they have a horse in this race).
To me it always means describing the desired state of your infra in structured data, storing that in git, and running a controller to reconcile it against the actual infra.
If your GitOps engine has to compile/run the "code" to uncover the desired state, that defeats the purpose of GitOps and is no better than running your hand crafted release bash script in a CI/CD pipeline.
It should have never been called infra-as-code, but infra-as-data.
This does not change my statement at all, though. You fundamentally can't predict the impact of some changes in a given environment until they're deployed. Being able to obtain the current state of the environment and reconcile some stuff doesn't change this.
That's why you should call what you store in git the _desired_ state, not anything else. A git repository is not a live database. It's a collection of static text files that change less often than your live system. There will be bugs and misconfiguration, and sometimes the desired state is just technically not reachable, and that's fine. What the actual state is doesn't matter. Leave that to the controller. State drifting is a problem your gitops engine should detect, and should be fixed by the owner of controller code.
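The desired-vs-actual split described above is just a reconciliation loop. A minimal toy sketch (names and structure are invented for illustration; real controllers like Kubernetes' use watches, retries, and status reporting):

```python
# Hypothetical reconciler: git holds the desired state, the live system
# holds the actual state, and the controller's only job is to compute
# and apply the actions that converge actual toward desired.

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the actions needed to move `actual` toward `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != spec:
            actions.append(f"update {name}")  # drift detected
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, actual))
# ['update web', 'create db', 'delete cache']
```

The point of the sketch is that the repo never needs to know the actual state; drift shows up as a nonempty action list, and resolving it is the controller's job, not git's.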
Some companies practice infra-as-code, point to their git repo, and tell me "this is our single source of truth" for their infrastructure. And I have to tell them that statement is wrong.
This is correct. You need some kind of running check on the environment and, when possible, code that handles exceptional cases.

Sometimes that's as simple as a service that shoots other services in the head to restart them. Other times it's more complicated. But lots of places can't afford to get more complicated than "alert a human and have them look at it".
> Probably even better is to ship a controller and a CRD for the config.
But how do you package the controller + CRD? The two leading choices are `kubectl apply -f` on a URL or Helm, and as soon as you need any customization to the controller itself you end up needing a tool like Helm.
Agreed. I'd recommend starting with static YAML, though. Use kustomize for the very few customisations required for, say, different environments. Keep them to a minimum; there's no reason for a controller's deployment to vary much, since they're usually deployed once per cluster.
> You don’t need to use NAT. Which means you have to set up a firewall on the router correctly. Default-deny, while still allowing ALL ICMP traffic through, as ICMP is kinda vital for IPv6 because it’s used to communicate error conditions.
I do think using NAT in the form of NPTv6 is awesome for home use because it allows you to have a consistent address regardless of your ISP prefix assignment.
Think of NPTv6 as a kind of "stateless NAT" where the external prefix is mapped 1:1 to your internal prefix. This means if your ISP changes your prefix, you only need to update your external DNS rather than renumbering all of your devices.
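The 1:1 prefix mapping can be illustrated in a few lines. This is a simplified sketch of the idea using Python's `ipaddress` module (real NPTv6 per RFC 6296 additionally adjusts some bits to keep transport checksums valid; this skips that, and all prefixes are documentation/ULA examples):

```python
import ipaddress

# Toy NPTv6-style translation: swap the internal prefix for the external
# one, leaving the host bits untouched. Stateless, so it works the same
# way in both directions with the prefixes reversed.

def translate(addr: str, internal_prefix: str, external_prefix: str) -> str:
    ip = int(ipaddress.IPv6Address(addr))
    internal = ipaddress.IPv6Network(internal_prefix)
    external = ipaddress.IPv6Network(external_prefix)
    host_bits = 128 - internal.prefixlen
    host_part = ip & ((1 << host_bits) - 1)       # keep the host portion
    new_ip = int(external.network_address) | host_part  # graft new prefix
    return str(ipaddress.IPv6Address(new_ip))

# The internal host keeps fd00:1234::42 forever; only this mapping
# changes when the ISP hands out a new prefix.
print(translate("fd00:1234::42", "fd00:1234::/48", "2001:db8:aaaa::/48"))
# 2001:db8:aaaa::42
```

Because nothing here is per-connection state, the router can translate each packet independently, which is exactly what makes NPTv6 cheap compared to stateful NAT.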
I do absolutely love the z13 and prefer it most of the time... but I definitely wouldn't call the battery life "all day" or even "almost all day".