Hacker News | ls65536's comments

~450 square feet, with how many feet in the third dimension? You probably had an order of magnitude more volume than 330 cubic feet there.

> You probably had an order of magnitude more volume than 330 cubic feet there

I’m 6’, so that’s the usable volume. (I’m not claustrophobic heightwise.)

I honestly don’t see an issue spending ten days in close quarters with folks I respect and admire.


You don't get it. Your 400sqft apartment needs to be shrunk by a factor of 6 to have the same area as the Orion. Try living in an 8x8 foot square for a couple weeks.
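For what it's worth, the arithmetic here checks out; a quick sketch (the 8 ft ceiling is my assumption, and 330 cubic feet is the habitable-volume figure quoted upthread):

```python
import math

apartment_area_sqft = 400      # apartment floor area from the thread
ceiling_height_ft = 8          # assumed typical ceiling height
orion_volume_cuft = 330        # habitable-volume figure quoted upthread

apartment_volume = apartment_area_sqft * ceiling_height_ft  # 3200 cubic feet
volume_ratio = apartment_volume / orion_volume_cuft         # ~9.7x, roughly an order of magnitude

# "Shrunk by a factor of 6": the floor area left after that shrink
shrunk_area = apartment_area_sqft / 6                       # ~67 square feet
side = math.sqrt(shrunk_area)                               # ~8.2 ft, hence the "8x8 foot square"
print(volume_ratio, side)
```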

That’s unfair as well - in space, more of the volume is usable. Perhaps equivalent to 2X 1G volume would be fairer.

> You don't get it

Have you ever been on a boat?


> Have you ever been on a boat?

What is a "boat"?


The boat you can step outside of, have a sky

> boat you can step outside of, have a sky

Not in a storm you can't! Granted, I didn't do ten days. But I was with two other people for close to a week and it was...fine. We're old friends. There were moments it got annoying. But it was never boring or restrictive. We just played games, drank, looked out of the portholes, cursed hangovers and talked down the one person who occasionally wanted to call it quits.


Yeah, that sounds fun, and closed spaces can be cozy too, especially when there isn't certain death outside.

This is really fun!

A possible bug though: I managed to finish Inverness to Gibraltar, and the top three spots on the leaderboard somehow had negative time durations!


My intuition would be that constant usage (not exceeding maximum rated capacity/thermals/etc.) should generally result in less wear compared to the more frequent thermal cycling that you might expect from intermittent use, but maybe there's something else going on here too. I suppose this would depend on what exactly the cause of the failure is.

Either way, these are obviously being sold for non-gaming-type workloads, so it's hard to argue that they're just being (ab)used beyond what they were intended for...unless somehow they really are being pushed past their design limits, but given the cost of these things I can't imagine anyone doing that willingly with a whole fleet of them.


Electromigration may be a factor


Electromigration slows exponentially as temperature drops (the rate decays exponentially with inverse temperature, per the usual Arrhenius term). If it's genuinely a factor, you're running that GPU way too hot.
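To make that concrete: electromigration lifetime is commonly modeled with Black's equation, MTTF = A * J^-n * exp(Ea/kT). Comparing two temperatures at the same current density, the constants cancel and only the Arrhenius ratio remains. A sketch, with an assumed (process-specific) activation energy of 0.9 eV:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
EA_EV = 0.9                # assumed electromigration activation energy (process-specific)

def mttf_ratio(t_cool_c: float, t_hot_c: float) -> float:
    """How many times longer the cooler part lasts, per Black's equation.

    The pre-factor A and the current-density term J**-n cancel when the
    current density is held constant, leaving exp(Ea/k * (1/Tc - 1/Th)).
    """
    t_cool_k = t_cool_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return math.exp(EA_EV / K_BOLTZMANN_EV * (1 / t_cool_k - 1 / t_hot_k))

# Running at 70 C instead of 90 C buys roughly a 5x lifetime margin
# under these assumed parameters:
print(mttf_ratio(70, 90))
```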


But if everyone follows this advice, then everything just gets overwhelmed by "hustlers" (and their "shameless spam"), and collectively we're now all worse off because of it. It just turns into yet another tragedy of the commons situation.

I say this as someone who received a lot of great feedback and had some interesting interactions after posting about a project of mine using "Show HN" a few years ago. I didn't need to spam anything to get the attention, but I admit maybe I just got very lucky, or maybe there were just fewer posts to "compete" with at the time (this was before the recent write-everything-with-AI-and-launch-it-out-there craze).

Finally, I'm not making any moral judgments here, and if someone feels they need to do this to get the attention they want, then who am I to tell them otherwise. But we should be aware of what we're giving up when we collectively behave this way, even if it's the inevitable outcome.


Yes, this is why it's called a race to the bottom. If everyone does what is best for themselves then everyone's result will be worse.


The total size isn't what matters in this case but rather the total number of files/directories that need to be traversed (and their file sizes summed).
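A sketch of why: the walk has to visit every directory entry and stat every file, so the work scales with the entry count, not the byte count (function name and structure are mine, just for illustration):

```python
import os

def total_size(root: str) -> tuple[int, int]:
    """Walk a tree, summing file sizes.

    The cost scales with the number of entries visited (one lstat per
    file), not with the total number of bytes those files contain.
    """
    n_entries, total = 0, 0
    for dirpath, dirnames, filenames in os.walk(root):
        n_entries += len(dirnames) + len(filenames)
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # file vanished mid-walk; skip it
    return n_entries, total
```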


I responded here, it's essentially the same content: https://news.ycombinator.com/item?id=46150030


> I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.

I get annoyed even when what's offered is a single /64 prefix (rather than something like a /56 or even /60), but putting IPv6 behind NAT is just ridiculous.


What is a single /64 prefix not enough for?


Multiple local networks while still using SLAAC.


Separating out main, guest, work, internet-of-shit, security & VPN subnets
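Concretely: SLAAC requires a full /64 per link, so with only a single /64 you can't split off those subnets without giving up autoconfiguration. A /56 delegation gives you 256 of them to hand out; a sketch using Python's `ipaddress` module and the documentation prefix as a stand-in:

```python
import ipaddress

# A /56 delegation carves into 256 distinct /64s, one per local network,
# each still wide enough for SLAAC (which needs a full /64 per link).
# 2001:db8::/32 is the documentation prefix, used here as a stand-in.
delegation = ipaddress.ip_network("2001:db8:0:ff00::/56")
subnets = list(delegation.subnets(new_prefix=64))

print(len(subnets))   # 256 subnets available
print(subnets[0])     # first /64, e.g. for the main LAN
```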


If that's really the case, I wish they would just come out and say it and spare the rest of us the burden of trying to debate such a decision on its technical merits. (Of course, I am aware that they owe me nothing here.)

Assuming this theory is true then, what other GPLv3-licensed "core" software in the distro could be next on their list?


Maybe the thought is that there will be more pressure now on getting all the tests to pass given the larger install base? It isn't a great way to push out software, but it's certainly a way to provide motivation. I'm personally more interested in whether the ultimate decision will be to leave these as the default coreutils implementation in the next Ubuntu LTS release version (26.04) or if they will switch back (and for what reason).


I can certainly understand it for something like sudo or for other tools where the attack surface is larger and certain security-critical interactions are happening, but in this case it really seems like a questionable tradeoff, where the benefits in this specific case are abstract (theoretically no more possibility of any memory-safety bugs) but the costs are very concrete (incompatibility issues; and possibly other, new, non-memory-safety bugs being introduced with new code).

EDIT: Just to be clear, I'm otherwise perfectly happy that these experiments are being done, and we should all be better off for it and learn something as a result. Obviously somebody has assessed that this tradeoff has at least a decent probability of being a net positive here in some timeframe, and if others are unhappy about it then I suppose they're welcome to install another implementation of coreutils, or use a different distro, or write their own, or whatever.


I'd prefer it if all software was written in languages that made it as easy as possible to avoid bugs, including memory-safety bugs, regardless of whether it seems like it has a large attack surface or not.


I view `uutils` as a good opportunity to shed legacy baggage that might be used by just 0.03% of the community but has to sit there, impeding certain feature additions and bug fixes.

F.ex. `sudo-rs` does not support most of what the normal `sudo` does... and it turned out that most people did not need most of `sudo` in the first place.

Less code leads to fewer bugs.


> "sudo"

Hence "doas".

OpenBSD has a lot of new stuff throughout the codebase.

No need to add a bloated dependency (e.g. Rust) just because you want to re-implement "yes" in a "memory-safe language" when you probably have no reason to.


I'm not going to speculate about what might be ahead in regards to Oracle's forecasting of data center demand, but regarding the idea of efficiency gains leading to lower demand, don't you think something like Jevons paradox might apply here?


