> "One last note on the data load process: At the time of publishing this blog post, all phone numbers beginning with international codes 4, 6, 8 and 9 have completed loading. The other codes are in progress and may take several hours more before they're searchable."
Thanks, because the end of the blog post mentions 8 instead of 9:
> At the time of publishing this blog post, all phone numbers beginning with international codes 4, 6, 8 and 8 have completed loading. The other codes are in progress and may take several hours more before they're searchable.
.NET 5.0 can now create a true single-file executable that doesn't need any unpacking on first run (except on Windows, where the JIT and GC DLLs either sit alongside the file or are unpacked separately; that should be resolved in .NET 6.0).
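For anyone who wants to try it, the publish invocation looks roughly like this (the runtime identifiers are just examples; IncludeNativeLibrariesForSelfExtract is the .NET 5 switch that bundles those native libraries at the cost of extracting them on first run):

    # self-contained, single-file publish for Linux
    dotnet publish -c Release -r linux-x64 --self-contained true \
        -p:PublishSingleFile=true

    # on Windows with .NET 5, optionally bundle the native JIT/GC DLLs
    # too; these then get extracted on first run
    dotnet publish -c Release -r win-x64 --self-contained true \
        -p:PublishSingleFile=true \
        -p:IncludeNativeLibrariesForSelfExtract=true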
On the scale of big hosting operations, 60Gbps outbound is not that much. If you're buying full-table IP transit from major carriers at IX points, I've seen 10GbE for $700-900/mo and 100GbE circuits for under $7k/month. Of course you wouldn't want just one transit provider, but I'm fairly sure that if somebody said to me 'here's $20,000 a month to buy transit', on the west coast it's within the realm of the possible.
Ideally, of course, they should be able to meet a wide range of downstream eyeball ISPs at the major IX points in the Bay Area and offload a lot of traffic with settlement-free peering.
60Gbps outbound from AWS, Azure or GCP would be astronomically expensive.
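To put a rough number on "astronomically expensive" (the 60Gbps and $20k/mo figures come from the comments above; the $0.05/GB is AWS's lowest standard published egress tier, and real negotiated rates will differ):

    # back-of-envelope: 60 Gbps sustained egress, transit vs. cloud list price
    gbps = 60
    seconds_per_month = 30 * 24 * 3600            # ~2.59M seconds
    gb_per_month = gbps / 8 * seconds_per_month   # ~19.4 million GB (~19.4 PB)

    transit_cost = 20_000                         # $/mo transit budget from above
    cloud_cost = gb_per_month * 0.05              # $0.05/GB, lowest published AWS tier

    print(f"{gb_per_month / 1e6:.1f} PB/month")
    print(f"transit ~${transit_cost:,}/mo vs cloud list ~${cloud_cost:,.0f}/mo")

That works out to roughly $1M/month at list price, i.e. around 50x the transit figure, before any negotiated discount.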
Exactly this. There are some logistical complexities (e.g. some of our bandwidth is funded by the E-Rate Universal Service Program for libraries, which runs on a July-June fiscal year, so rapid upgrades on that front aren't possible), but by and large egress bandwidth isn't our primary challenge. Intersite links, as I noted in the video, are the current big one, and addressing that can and does involve occasional time-consuming construction -- but honestly, over the past year, a combination of a total blowout of my usual capacity planning (including equipment budgets) plus the logistical complexities of lockdown has meant we haven't upgraded as fast as we'd like.
God bless HE (Hurricane Electric) Fremont. They are the unsung story of the Internet backbone. If one were to make a list of companies that at some point had a major fraction of their hosted physical infrastructure at HE, I suspect it would make people's jaws drop.
It's been a huge blessing for a whole generation of startups to have a radically well-connected space where just about anyone can drop in equipment at multi-gig unmetered rates (albeit admittedly extremely constrained on power and cooling). It is honestly a part of what has made Silicon Valley great. Well, that and being able to cobble together a few replacement servers from Fry's components (RIP!) or schlep out to some ex-CTO's Sunnyvale garage in the middle of the night to offload some lightly used VA Linux 1Us...
Even today, you can get 15A and a 42U cabinet to call your own with unmetered gigabit for $400/mo - and probably less if you ask nicely.
IME with cloud, both in the small and in the large: network prices at the cloud providers are artificially high, and very easy to get discounted if you're a big spender.
It seems they are regularly maxing out their network infrastructure. If it's so cheap, how come they don't just buy more? Is it the cost of the actual hardware? (I know they recently upgraded)
They are maxing out the fiber links between their own datacenters, which is in the process of being addressed. If the bits can't get from the datacenter full of hard drives to the datacenter that connects to the internet, not much point in buying additional transit capacity.
I have no clue how people can afford what AWS charges for bandwidth. I did the math once for migrating a project to AWS, and the bandwidth alone would have cost 10x my entire current infrastructure for that project, which is something I run for free.
Because people have nothing to compare their AWS cost to. They don't know how much it would cost them to host their service outside of AWS.
And it is not only a cost comparison: you need different kinds of people to manage in-house vs. cloud; not that you need fewer or more, just different skills.
Not only that: it is really hard to predict AWS costs, since so many variables go in. And starting with a small side project on AWS is easy, and then each additional step is a small step...
For lots of orgs I've seen, it creeps up slowly until you're paying 10-50x what full transit (without any peering) would cost, but by that point you're too locked in to do anything.
Depends on whether bandwidth is important to what you're doing. In many applications it isn't, so even the inflated prices charged by AWS et al. don't really matter in the context of other expenses.
To be fair here, when you're pouring that much money into AWS you probably have a better contract and can negotiate the price down quite a bit. Additionally, you could use CloudFront to further reduce your bandwidth costs.
That's not to say that it wouldn't be incredibly expensive, but probably far less than what you see on the pricing page.
A hybrid solution might be possible: put your core infra on AWS, but get a very cheap CDN (or custom solution) in front of it to handle the 60Gbps so that only a small fraction hits your AWS infra. Do the same for storage, e.g. build your own Ceph cluster on bare metal instead of using Amazon S3.
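For the storage half of that, Ceph's RADOS Gateway speaks the S3 API, so most existing S3 client code only needs an endpoint override. A minimal sketch with boto3 (the endpoint host, bucket name and credentials below are placeholders; in practice you'd pull keys from config, not hardcode them):

    import boto3

    # point the standard S3 client at a self-hosted Ceph RADOS Gateway
    # (7480 is RGW's default port); rgw.internal.example is a placeholder
    s3 = boto3.client(
        "s3",
        endpoint_url="https://rgw.internal.example:7480",
        aws_access_key_id="CEPH_ACCESS_KEY",       # placeholder
        aws_secret_access_key="CEPH_SECRET_KEY",   # placeholder
    )

    # from here on it's ordinary S3 usage
    s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
    print(s3.list_objects_v2(Bucket="my-bucket").get("KeyCount", 0))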
Though the minimum redeemable amount is $100k https://tether.to/en/fees
They are depegging because people are selling for under a dollar on Coinbase (perhaps in panic), and Alameda Research (the trading firm affiliated with the FTX exchange), which normally makes a ton of money doing the arbitrage between dollars and Tether, is having liquidity problems, so it isn't doing that currently.