Hacker News | benaadams's comments

They are; they just aren't buying them back on Coinbase. Tether doesn't manage the secondary market (CEXes and DEXes).

Though the minimum redeemable amount is $100k https://tether.to/en/fees

The reason it's depegging is that people are selling for under a dollar on Coinbase (perhaps in panic); and Alameda Research (re: the FTX exchange), which normally makes a ton of money doing the arbitrage between dollars and Tether, is having liquidity problems, so it isn't doing so currently.
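For illustration, the arbitrage mechanism can be sketched in a few lines of Python. The $100k minimum is from the tether.to fees page; the 0.1% redemption fee and the $0.97 market price are illustrative assumptions, not quoted figures:

```python
# Sketch of the USDT redemption arbitrage described above. The $100k
# minimum is Tether's documented floor; the 0.1% redemption fee and
# $0.97 panic-sale price are illustrative assumptions.

def arbitrage_profit(usdt_amount: float, market_price: float,
                     redemption_fee: float = 0.001) -> float:
    """Buy discounted USDT on an exchange, then redeem 1:1 with Tether."""
    if usdt_amount < 100_000:
        raise ValueError("below Tether's minimum redemption amount")
    cost = usdt_amount * market_price              # pay the depegged price
    proceeds = usdt_amount * (1 - redemption_fee)  # redeem at $1 per USDT
    return proceeds - cost

# 1M USDT bought at $0.97 nets roughly $29k before trading costs.
print(round(arbitrage_profit(1_000_000, 0.97)))
```

When arbitrageurs like Alameda stop doing this, nothing pulls the exchange price back toward $1, which is the depeg dynamic described above.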


Visual Studio 2022 is 64-bit


> "One last note on the data load process: At the time of publishing this blog post, all phone numbers beginning with international codes 4, 6, 8 and 9 have completed loading. The other codes are in progress and may take several hours more before they're searchable."

https://twitter.com/troyhunt/status/1379366099544797189


Thanks, because the end of the blog post mentions 8 instead of 9:

> At the time of publishing this blog post, all phone numbers beginning with international codes 4, 6, 8 and 8 have completed loading. The other codes are in progress and may take several hours more before they're searchable.

So I was like: what about another 8?

Edit: Actually, it is "4, 6, 7 and 8"! cf. https://twitter.com/troyhunt/status/1379377818618884098


.NET 5.0 can now create a true single-file executable that doesn't need any unpacking on first run (except on Windows, where the JIT and GC DLLs will live or be unpacked separately; that should be resolved in .NET 6.0).
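For reference, the publish settings involved look roughly like this (a sketch; the runtime identifier is illustrative, the property names are the documented .NET 5 ones):

```xml
<PropertyGroup>
  <PublishSingleFile>true</PublishSingleFile>
  <SelfContained>true</SelfContained>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
</PropertyGroup>
```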


Anything in particular? .NET 5.0 (and therefore C#) runs beautifully on Windows, Mac and Linux (x64 or Arm)


Specify the platform and that you want it precompiled for that platform?

https://docs.microsoft.com/en-us/dotnet/core/whats-new/dotne...

    <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
    <PublishReadyToRun>true</PublishReadyToRun>

It will still JIT after 30 calls to refine further, but it should start at a higher level of optimization.
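The same can be done from the command line (a sketch; the runtime identifier and configuration are illustrative, `-p:` passes the MSBuild property through):

```shell
dotnet publish -c Release -r linux-x64 -p:PublishReadyToRun=true
```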


I assume the 200PB of storage and 60Gbps of egress bandwidth, 24/365, that they do would be _extremely_ pricey on AWS...


On the scale of big hosting operations, 60Gbps outbound is not that much. If you're buying full-table IP transit from major carriers at IX points, I've seen 10GbE for $700-900/mo and 100GbE circuits for under $7k/month. Of course you wouldn't want to have just one transit provider, but I'm fairly sure that if somebody said to me 'here's $20,000 a month to buy transit' on the west coast, it's within the realm of the possible.

Ideally, of course, they should be able to meet a fair number of downstream eyeball ISPs at the major IX points in the bay area and offload a lot of traffic with settlement-free peering.

60Gbps outbound from AWS, Azure or GCP would be astronomically expensive.
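Back-of-envelope numbers make the gap concrete. The $0.05/GB cloud egress rate here is an illustrative bulk-tier assumption (real contracts vary), compared against the ~$20k/mo transit estimate above:

```python
# 60 Gbps sustained, priced as metered cloud egress vs. flat-rate transit.
# The $0.05/GB rate is an illustrative assumption, not a quoted price.

SECONDS_PER_MONTH = 30 * 24 * 3600          # 2,592,000 s
gb_per_month = 60 / 8 * SECONDS_PER_MONTH   # 60 Gbit/s -> 19,440,000 GB/month

cloud_cost = gb_per_month * 0.05            # ~$972k/month at $0.05/GB
transit_cost = 20_000                       # $/month, per the estimate above

print(f"~{gb_per_month / 1e6:.1f} PB/month")
print(f"cloud ${cloud_cost:,.0f}/mo vs transit ${transit_cost:,}/mo")
```

Even with heavy negotiated discounts, that's a difference of well over an order of magnitude before storage costs enter the picture.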


It seems that companies can be both too big for the cloud and too small for the cloud (don't need k8s). I wonder where the sweet spot is.


Horizontal and vertical scaling are the latest push, but diagonal would be the sweet spot.


Exactly this. There are some logistical complexities (e.g. some of our bandwidth is funded by the E-Rate Universal Service Program for libraries, which runs on a July-June fiscal year, so rapid upgrades on that front aren't possible), but by and large egress bandwidth isn't our primary challenge. Intersite links, as I noted in the video, are the current big one, and that can and does involve occasional time-consuming construction -- but honestly, over the past year, a combination of a total blowout of my usual capacity planning (including equipment budgets) and the logistical complexities of lockdown has left us slower to upgrade than we'd like.


Two words: Hurricane Electric.


God bless HE Fremont. They are the unsung story of the Internet backbone. If one were to make a list of companies that at some point had a major fraction of their hosted physical infrastructure at HE I suspect it would make people's jaws drop.

It's been a huge blessing for a whole generation of startups to have a radically well-connected space that just about anyone can drop in equipment at multi-gig unmetered (albeit admittedly extremely constrained on power and cooling). It is honestly a part of what has made Silicon Valley great. Well, that and being able to cobble together a few replacement servers from Fry's components (RIP!) or schlep out to some ex-CTO's Sunnyvale garage in the middle of the night to offload some lightly used VA Linux 1U's...

Even today, you can get 15A and a 42U cabinet to call your own with unmetered gigabit for $400/mo - and probably less if you ask nicely.


IME with cloud in the small and in the large: network prices are artificially high on the cloud providers and are very easy to get discounted if you are a big spender.


So the small pay for the large?


Yes, that's typically how it works in any industry. You get a discount on bulk orders.


It seems they are regularly maxing out their network infrastructure. If it's so cheap, how come they don't just buy more? Is it the cost of the actual hardware? (I know they recently upgraded)


We are upgrading again-- the pandemic has made us kind-of popular.

Because of budget we tend to use things up 100% before putting more in.


They are maxing out the fiber links between their own datacenters, which is in the process of being addressed. If the bits can't get from the datacenter full of hard drives to the datacenter that connects to the internet, not much point in buying additional transit capacity.


I have no clue how people can afford what AWS charges for bandwidth. I did the math once for migrating a project to AWS, and the bandwidth alone cost 10x my entire current infrastructure for that project, which is something I run for free.


Because people have nothing to compare their AWS cost to. They don't know how much it would cost them to host their service outside of AWS.

And it is not only a cost comparison. You need different kinds of people to manage in-house vs. cloud; not that you need more or fewer, just different skills.


Absolutely - AWS is this generation's IBM, no one was ever fired for buying services from them, so to speak.


Not only that: it is really hard to predict AWS costs. So many variables go into it. And starting with a small side project in AWS is easy, and then each additional step is a small step ...


For lots of orgs I've seen, it creeps up slowly until you're paying 10-50x the cost of full transit without any peers, but by that point you're too locked in to do anything.


Depends on whether bandwidth is important to what you're doing. In many applications it isn't, so even the inflated prices charged by AWS et al. don't really matter in the context of other expenses.


To be fair here, when you're pouring that much money into AWS you probably have a better contract and can negotiate the price down quite a bit. Additionally, you could use CloudFront to further reduce your bandwidth costs.

That's not to say that it wouldn't be incredibly expensive, but probably far less than what you see on the pricing page.


A hybrid solution might be possible: put your core infra on AWS, but get a very cheap CDN (or custom solution) in front of it to handle the 60Gbps so that only a small fraction hits your AWS infra. Do the same for storage, e.g. build your own Ceph cluster on bare metal instead of using Amazon S3.
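As a sketch of the "custom solution" variant, a caching reverse proxy in front of the origin can absorb most of that egress. The hostnames, paths and cache sizes below are hypothetical:

```nginx
# Cache up to 500 GB of hot objects on local disk at the edge node.
proxy_cache_path /var/cache/edge levels=1:2 keys_zone=edge:100m
                 max_size=500g inactive=7d use_temp_path=off;

server {
    listen 443 ssl;
    server_name cdn.example.com;

    location / {
        proxy_cache edge;
        proxy_cache_valid 200 24h;               # keep successful responses a day
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass https://origin.example.com;   # the (expensive) AWS origin
    }
}
```

With a decent cache hit ratio, only misses and revalidations generate AWS egress charges; the bulk of the 60Gbps is served from the cheap edge.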


USD is required to pay US taxes; that gives it a baseline reference point.


The rise of neoconservatism


Stacktrace improvements in .NET Core 2.1: Intelligible stack traces for async, iterators and Dictionary (key not found)

https://www.ageofascent.com/2018/01/26/stack-trace-for-excep...


Awesome! Thank you. Wonderful to see these open source contributions to .NET.

