Thristle's comments | Hacker News

No lockdowns since around the 2nd dose (pre boosters) if I'm not mistaken

I believe tourism is blocked for a couple of months? Citizens can still fly out to almost all destinations and return with no/minimal/2 week quarantine

Statistics say about 1 out of 5 infected people was found positive by the airport PCR test.


It really depends on the country, the trend in the number of corona cases, and the progress of vaccinations.

If you are in the US, we already see some of the big corps (not only FAANG) starting to talk about return to office and sanctions against unvaccinated workers (probably "prep work" for return to office).

Optimistic POV: more and more companies will open their offices on a non-mandatory basis in H1. Every new variant and every new case in some office will delay/pause this, but it will happen.


Conversely, I was just visiting one of the biggest fintechs (so not California culture), and after Omicron hit, they nixed their return to work plans. We were almost the only people on the entire floor.


How much is small? In terms of CPU cores and memory.

Cloud cost is all about capacity, and each cloud has very good tools to see where the money is going. It sounds like either what you consider small is not that, or money is being wasted somewhere on things you are not aware of.


I guess “small” is open for interpretation.

Let’s say 720vCPU for the cluster and about 2.3TiB of memory.

These would be quite small nodes if you bought them as machines in a datacenter. Most modern machines have 40+ vCPU and 128G+ RAM each.

I have 36 machines in my GKE node pool with 20 vCPU and 64G of RAM each. So the aggregate totals sound high, but it's not many. In terms of real machines I could have fewer, like 18 or so.


That compute is much larger than at startups I worked for that made millions of dollars / served 50+M unique users per month. So obviously not "corporate" sized, but definitely not small.

Looking at the GKE price calculator: N2 machines (which I have no idea whether they are the cheapest per vCPU/GB) ×7 will give you 896 vCPUs and 3,584 GB of RAM. That will cost you about $21K per month for a zonal cluster.
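That napkin math can be checked in a few lines. The split into seven n2-standard-128 nodes is an assumption that matches the aggregate figures, and $21K/month is the rough estimate quoted above, not an official price:

```python
# Seven blocks of N2 capacity (assumed n2-standard-128: 128 vCPU, 512 GB each).
machines = 7
vcpus = 128 * machines      # 896 vCPUs total
ram_gb = 512 * machines     # 3584 GB total
monthly_cost_usd = 21_000   # rough zonal-cluster estimate from the comment

# Effective blended rate, with memory cost bundled into the vCPU figure.
per_vcpu = monthly_cost_usd / vcpus
print(vcpus, ram_gb, round(per_vcpu, 2))  # 896 3584 23.44
```

At roughly $23/vCPU-month blended, you can sanity-check any cluster bill against its aggregate core count.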

We can do napkin calculations, but that won't help you. If you want to get your bill down, you just need to open the billing reports and start slicing the data by usage.

Edit: I really hope I'm not coming off as condescending or anything. I used to work at a startup related to cloud cost optimization, and I currently work as a devops engineer in a cloud environment, so I know how these costs can get out of hand.


I have definitely looked into getting the costs down, we’re not doing anything truly special.

Making use of best practices costs a lot more than most people expect. Interzonal networking is charged, for example; a lot of people also assume that redundancy is built into things like RDS or CloudSQL, but it's not, and you should be running replicas.

And of course traffic to databases is interzonal networking.
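As a rough sketch of how this adds up: the $0.01/GB figure below is an assumption (GCP's commonly cited intra-region, cross-zone egress rate; check the current pricing page), and the traffic volume is hypothetical:

```python
# Hypothetical cross-zone database traffic cost estimate.
rate_per_gb = 0.01               # assumed intra-region, cross-zone egress rate (USD/GB)
db_traffic_gb_per_day = 500      # hypothetical app-to-replica traffic volume
monthly_cost = db_traffic_gb_per_day * 30 * rate_per_gb
print(f"${monthly_cost:.2f}/month")  # $150.00/month
```

A penny per gigabyte sounds negligible until a chatty service crosses zones on every query.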

It adds up, and not a lot can be done in many cases.


My SO just started a PhD in chem eng. Before she decided, we looked for some open positions and found nothing, or only low-paying (40-50K a year) jobs for grads.

I guess everything IS bigger in the USA


Obsolete? RSS is the only way I can get a list of new YouTube videos from my subscribed channels.

Go over your feed? Useless!

Go over my subscription feed till I get back to the last video I saw? Way too much work, and that feed was known to fail (just like clicking the bell).

The only working solution is an RSS feed for every channel I want to follow, plus automation to add those to my different playlists.
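The per-channel feeds come from YouTube's well-known Atom endpoint at `youtube.com/feeds/videos.xml`. A minimal stdlib-only sketch (the channel ID and the `latest_video_titles` helper are illustrative, not from the comment):

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def channel_feed_url(channel_id: str) -> str:
    # YouTube serves a per-channel Atom feed at this endpoint.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

def latest_video_titles(channel_id: str, limit: int = 5) -> list[str]:
    # Fetch the feed and pull out the newest entry titles.
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    with urlopen(channel_feed_url(channel_id)) as resp:
        root = ET.fromstring(resp.read())
    titles = [entry.findtext("atom:title", namespaces=ns)
              for entry in root.findall("atom:entry", ns)]
    return titles[:limit]
```

Pointing any feed reader at that URL per channel, or polling it from a script that sorts videos into playlists, covers the workflow described above.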


I started moving a lot of my YouTube subs to RSS recently as well. I was subbed to 190 channels on YouTube, but somehow I'm supposed to keep up to speed with all of them using only a single view where they are all mixed together.


The value for money on those is insane


In the past I used brew to install EVERYTHING. This led to situations like in the OP, where DBs, compilers, interpreters, and other things that I wished would stay frozen got updated by surprise.

Since then I've moved to managing everything that needs to stay frozen by either a versioned formula/cask (python@3.9 and so on) or using brew to install a version manager (asdf, pyenv, pipx) and then using that. Results have been much better since.

Although versioned casks/formulae exist, you will still get patch updates: python@3.9 will update from 3.9.0 to 3.9.1 if that ever exists. So while it doesn't completely protect you from upgrades, it does minimize the risk.

Also, there is really no need to install virtualenv directly from brew; that's a very bad practice. Of course, that doesn't change the fact that brew did things unrelated to what you asked of it.


brew will auto-update on install by default (unless HOMEBREW_NO_AUTO_UPDATE is set).

why are you being this hostile while being completely wrong?


Don't worry. Preemptible VMs were used by enough people, and this pricing model is 100% profit for the clouds.

This is just aligning the offering with Azure and AWS


1. You only pay for compute time you actually get.

2. Any stateless process, such as a web server, is ideal for spot.

3. k8s nodes are also very good to run on spot, since pods are "natively" crash "resistant".

