Hacker News | Sharlin's comments

It's the night-side Earth, taken at a high ISO value to keep shutter speed fast to prevent blur.


Ok, thank you, that makes more sense. I thought it was the day side.


Yes, I was also confused when I first saw it – how could the aurora be visible?! The bright sliver of atmosphere in the lower right is, of course, backlit by the sun, which is itself eclipsed by Earth. It's the near-full moon that provides most of the illumination here. Besides both auroral rings you can also see airglow, city lights, and lightning flashes. It's a marvellous photo.


I was confused when I first saw this photo, as I don't think I've ever before seen a nightside, moonlit Earth, exposed so that it looks like the dayside at a first glance. I wonder how many casual viewers actually realize it's the night side. A nice demonstration of how moonlight is pretty much exactly like sunlight, just much much dimmer. In particular it has the same color, even though moonlight is often thought of as bluish and sunlight as yellowish!


I've done several shoots lit only by the full moon. Doing long exposures, the images are, as you stated, not much different from images taken during the day, except when looking at the sky and seeing stars.

I've also done video shoots with the newer mirrorless cameras and fast lenses shooting wide open, again lit with nothing but the full moon. It again looks like daylight in the image. As a bit of BTS, I recorded a video of the screen on the camera showing what it was seeing, and then pulled away and reframed to show essentially the same shot as the camera, but it's just solid black. One of those videos was fun, as we caught a bit of lens flaring from the moon, and you can actually see the details of the surface of the moon in the reflection. It was one of those things I just never considered before, as flares coming from lights or the sun are just void of detail.


Thanks. Until you pointed out it's Earth at night, I had no clue what was supposed to be special about this photo (it appeared suspiciously pixelated for something 'high res', and neighbouring pixels seemed to contrast in colour rather than smoothly complementing as most photos do - but I guess that's random patches of city lights being captured by the camera). Cool stuff!

Something I haven't figured out is: what is that yellow/whitish smudge toward the center of the earth? It looks like camera glare or a reflection?


Yeah, it's a reflection from the window, of something inside the ship.


It's a remarkable photo. You can see the aurora australis at the top right of the image (it's upside down, if there is such a thing: that's the Strait of Gibraltar at the lower left, and the Sahara above it) and the skein of atmosphere wrapping the entire planet. Those look like noctilucent clouds in the north, or possibly more aurora.


It really is gorgeous. You can see both auroral rings, then there's airglow, and city lights around Gibraltar and on the South American coast, and lightning flashes in the storm clouds over the tropics.


It explains why the image is so grainy. At first I was confused what that stripe to the left and the bottom was. But it’s just the window edge, and the noise isn’t stars.


(To be clear, the bright dots are stars [except the brightest one, in the lower right, is Venus I think], which makes this photo also a great demonstration that of course you can capture stars in space, you just have to expose properly!)


Who said you can't capture stars in space? What do you think the purpose of Hubble, JWST, etc. is? There's also plenty of imagery taken from the ISS that clearly shows stars. I've definitely seen Orion in some of that imagery, and it put a different perspective on the size of the constellations when seen from that angle.


I referred to the common question (or accusation) of why there are no stars in, say, the Apollo photos taken on the moon. The answer is, of course, that you can't see stars if you're exposing for something bright and sunlit, like the day side of Earth, or the lunar surface.


Of course. But they are not visible in the Chang'e photos on the dark side either. I think in the interview with the astronauts following the first Apollo mission, a reporter asked for confirmation that the stars were not visible because of "the glare" (an interesting question in itself). The explanation given was that the stars were not visible with the eye, but were visible with "the optics".


Photos from the moon landings don't have stars in them, because they are exposed for full daylight on the moon.


I’m assuming the people who complain that there aren’t stars are the “moon landing faked” crowd… it’s hilarious to me that they think this vast conspiracy came together to fake that whole thing, and that they literally forgot to put a bunch of tiny 25-cent flashlight bulbs up poking through the black backdrop on the sound stage. Like, no one thought about the stars, or they couldn’t figure out how to do those “special effects” and just prayed no one would spot the error.


Just answered my own question to my satisfaction; they are stars.

The same specks, which match star charts, show up in two images taken a few moments apart at different exposures (links were given downthread).


How do you know that they're stars? I believe they probably are stars as well (by visual comparison with a star chart, suitably rotated), but I've found no source for either claim.

I did find multiple sources, including TFA, for the brightest being Venus.


They're much brighter than the noise floor. Photographic noise doesn't really have such outliers.


Why would you think they are not stars? Not really sure the confusion on the matter. Are we leaning towards this being shot from a soundstage?


Well one of them is obviously Venus. How did you determine the others weren't stars?


I’m talking about the grainy noise over all the black parts (actually over the Earth disk as well), including beyond the window edge. The window edge itself looks like a denser and brighter stripe of stars.

Zoom into this higher-resolution version: https://www.nasa.gov/wp-content/uploads/2026/04/art002e00019...


Yep, that's definitely noise.


What does it look like to human eyes? Is there enough light for a person up there to see colour, or would it look black and white (like a moonlit scene on the ground)?


No color, I’m pretty sure.


> even though moonlight is often thought of as bluish and sunlight as yellowish!

Is that... true? Sunlight is seen as yellow, of course, but the moon is usually thought of as white.


Sunlight is yellowish in atmosphere since some blue's been scattered by the atmosphere[1], but it's white in space.

[1] https://en.wikipedia.org/wiki/Rayleigh_scattering


I don't think that's right. Sunlight is white in the atmosphere too. Scattering causes the sun, not the light, to look yellow, and so sunlight is thought of as yellow.


Scattering doesn't really make the sun appear yellow except when it's low, behind a lot of air. When it's above 30° or so it just looks blinding, neutral white (or non-blinding neutral white if there's suitable cloud cover or some other filter in front of it). Even though a lot of the blues are scattered around, the sun still looks just white when it's high in the sky.

But when the sun does look yellow, its light is yellow too, that’s the definition of "looks yellow". And the golden hour paints everything in very iconic yellow-orange hues. The light as integrated over the whole sky is still white (modulo whatever’s scattered back into space), but the light that comes from the direction of the sun is clearly tinted yellow and the light from the rest of the sky is clearly tinted blue.


> But when the sun does look yellow, its light is yellow too, that’s the definition of "looks yellow".

Not quite; the sun is far away and is restricted to a tiny portion of the sky, but its light covers half the earth at a time. It is simultaneously true that the sun looks yellow and that the light we receive from it is white. It isn't the case that objects in direct sunlight are yellowed by that light; the yellow appearance when you look at the sun is something of an illusion.

> Even though a lot of the blues are scattered around, the sun still looks just white when it’s high in the sky.

This isn't true.


That's fair, I was thinking of how night, or twilight, as a whole is associated with cool hues, but it's probably true that moonlight in itself is usually thought of as neutral white.


The camera is compensating for extremely low light, so you end up with something that looks closer to a daylight exposure.


I know. Apparently this was shot at ISO 51,200.


Moonlight is reflected sunlight.


That's obvious. But the moon is so perfectly neutral gray that the reflected light is essentially the same color as the incident sunlight.


Reminds me of one of the best new comedy series in years, Very Important People, doing improvised spoof interviews:

https://www.tiktok.com/@veryimportantpeopleshow/video/731957...


Around 70% of security vulnerabilities are about memory safety and only exist because software is written in C and C++. Because most vulnerabilities are in newly written code, Google has found that simply starting writing new code in Rust (rather than trying to rewrite existing codebases) quickly brings the number of found vulnerabilities down drastically.


You can't just write Rust in a part of the codebase that's all C/C++. Tools for checking the newly written C/C++ code for issues will still be valuable for a very long time.


You actually can? A Rust-written function that exports a C ABI and calls C ABI functions interops just fine with C. Of course that's all unsafe (unless you're doing pure value-based programming and not calling any foreign code), so you don't get much of a safety gain at the single-function level.
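To make that concrete, here's a minimal sketch of what such a boundary function can look like (the name `checked_sum` and its signature are made up for illustration):

```rust
// A Rust function exported with the C ABI, callable from existing C code
// via a declaration like: uint64_t checked_sum(const uint32_t *data, size_t len);
#[no_mangle]
pub extern "C" fn checked_sum(data: *const u32, len: usize) -> u64 {
    // The boundary itself is unsafe: we must trust the caller that
    // `data` points to at least `len` valid u32 values.
    if data.is_null() {
        return 0;
    }
    let slice = unsafe { std::slice::from_raw_parts(data, len) };
    // From here on, everything is ordinary safe Rust.
    slice.iter().map(|&x| u64::from(x)).sum()
}
```

As noted above, the safety gain for a single function like this is small; it grows as more of the logic behind the C-facing boundary becomes safe Rust.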


If you're going to swap out one function in a chain of functions for a Rust version, you're destroying your codebase. You simply can't replace one tiny piece of code in a large codebase with a version in a different language. Doing so would be insane.


The C ABI is not the C++ ABI. People often write "C/C++", but they're completely different languages. C++ is much higher level and more modern. C++ is closer to Rust than it is to C.


I find this interesting.

Curl's Daniel Stenberg claimed during his NDC talk that vulnerabilities in this project are 8 years old on average.

I wonder where the disconnect comes from.


It comes from all his reporters being teenagers in developing countries with older models, and people using SOTA models who know how to qualify a potential vulnerability having much bigger fish to fry than curl. curl is a meaningful target, but it's in nobody's top tier.


And to a good approximation all real world Rust uses unsafe everywhere.

So we now have a new code base in an undefined language which still has memory bugs.

This is progress.


No, this is false. For Rust codebases that aren't doing high-performance data structures, C interop, or bare-metal stuff, it's typical to write no unsafe code at all. I'm not sure who told you otherwise, but they have no idea what they're talking about.


It's the classic "misunderstanding": because UB or buggy unsafe code could in theory corrupt any part of your running application (which is technically true), any codebase with at least one instance of UB or buggy unsafe code (which is ~100% of codebases) is supposedly safety-wise equivalent to a codebase with zero safety checks - as if all the safety checks were obviously complete lies and therefore pointless time-wasters.

Which obviously isn't how it works in practice, just like how C doesn't delete all the files on your computer when your program contains any form of signed integer overflow, even though it technically could as that is totally allowed according to the language spec.


If you're talking about Rust codebases, I'm pretty sure that writing sound unsafe code is at least feasible. It's not easy, and it should be avoided if at all possible, but saying that 100% of those codebases are unsound is pessimistic.

One feasible approach is to use "storytelling" as described here: https://www.ralfj.de/blog/2026/03/13/inline-asm.html That's talking about inline assembly, but in principle any other unsafe feature could be similarly modeled.
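In Rust, that kind of storytelling commonly takes the form of a `// SAFETY:` comment on each unsafe block, stating the invariant the block relies on. A small illustrative sketch (the function itself is made up, and a safe `v.first().copied()` would do the same job; the unsafe version exists only to show the convention):

```rust
/// Returns the first element of `v`, or None if it is empty.
fn first_or_none(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `v` is non-empty, so index 0 is in
    // bounds and `get_unchecked(0)` cannot read out of bounds.
    Some(unsafe { *v.get_unchecked(0) })
}
```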


It's not impossible, it is just highly unlikely that you'll never write a single safety-related bug - especially in nontrivial applications and in mixed C-plus-Rust codebases. For every single bug-free codebase there will be thousands containing undiscovered subtle-but-usually-harmless bugs.

After all, if humans were able to routinely write bug-free code, why even worry about unsoundness and UB in C? Surely having developers write safe C code would be easier than trying to get a massive ecosystem to adopt a completely new and not exactly trivial programming language?


Rust is not really "completely new" for a good C/C++ coder, it just cleans up the syntax a bit (for easier machine-parsing) and focuses on enforcing the guidelines you need to write safe code. This actually explains much of its success. The fact that this also makes it a nice enough high-level language for the Python/Ruby/JavaScript etc. crowd is a bit of a happy accident, not something that's inherent to it.


Our experiences are different.

Good developers only write unsafe rust when there is good reason to. There are a lot of bad developers that add unsafe anytime they don't understand a Rust error, and then don't take it out when that doesn't fix the problem (hopefully just a minority, but I've seen it).


The parent comment references real-world data from Google: https://security.googleblog.com/2024/09/eliminating-memory-s...


How?


The joke is, more or less, you can reduce everyone into two piles. But that's almost assuredly wrong.

It's very very hard to have what most people would call "autistic" levels of rationality in discourse in this world. But if you hold yourself to high standards, you quickly compute the logical argument OP is making (people who were excited were gullible marks etc. etc.) and realize it's wrong in several different ways (happy to explicate if unclear).

This is, of course, very easy if you were A) excited and B) didn't think it'd come to pass. Also, observing that A does not imply B and vice versa is the minimally sufficient observation to rule out the OP's comment being rational*

* n.b. "rational" means something akin to "not affected by a psychoactive disorder" in everyday discourse. In philosophy / logic class, it means the statements and conclusion are internally coherent. "The moon is made of cheese because it is yellow" is rational, "The moon is made of cheese because Teddy Roosevelt likes cheese" is irrational. "The moon is made of cheese because the Pope likes cheese" is rational with the implied premises "God controls all, and he loves the Pope".


You have copyright to a commit authored by you. You (almost certainly) don't have copyright (nobody has) to a commit authored by Claude.


Where is there any legal precedent for that?

In some jurisdictions (e.g. the UK) the law is already clear that you own the copyright. In the US it is almost certain that you will be the author. The reports of cases saying otherwise have been misreported - the courts found the AI could not own the copyright.


>Where is there any legal precedent for that?

Thaler v. Perlmutter: The D.C. Circuit Court affirmed in March 2025 that the Copyright Act requires works to be authored "in the first instance by a human being," a ruling the Supreme Court left intact by declining to hear the case in 2026.

And in the US constitution,

https://constitution.congress.gov/browse/article-1/section-8...

Authors and inventors, courts have ruled, means people. Only people. A monkey taking a selfie with your camera doesn't mean you own a copyright. An AI generating code with your computer is likewise, devoid of any copyright protection.


The Thaler ruling addresses a different point.

The ruling says that the LLM cannot be the author. It does not say that the human being using the LLM cannot be the author. The ruling was very clear that it did not address whether a human being was the copyright holder because Thaler waived that argument.

The position with a monkey using your camera is similar, and you may or may not hold the copyright depending on what you did - was it pure accident, or did you set things up? Opinions on the well-known case are mixed: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

Where wildlife photographers deliberately set up a shot to be triggered automatically (e.g. by a bird flying through the focus) they do hold the copyright.


Guidance on AI is unambiguous.

https://www.copyright.gov/ai/

AI generated code has no copyright. And if it DID somehow have copyright, it wouldn't be yours. It would belong to the code it was "trained" on. The code it algorithmically copied. You're trying to have your cake, and eat it too. You could maybe claim your prompts are copyrighted, but that's not what leaked. The AI generated code leaked.


The linked document labeled "Part 2: Copyrightability", section V. "Conclusions" states the following:

> the Copyright Office concludes that existing legal doctrines are adequate and appropriate to resolve questions of copyrightability. Copyright law has long adapted to new technology and can enable case-by-case determinations as to whether AI-generated outputs reflect sufficient human contribution to warrant copyright protection. As described above, in many circumstances these outputs will be copyrightable in whole or in part—where AI is used as a tool, and where a human has been able to determine the expressive elements they contain. Prompts alone, however, at this stage are unlikely to satisfy those requirements.

So the TL;DR basically implies pure slop is NOT copyrightable within the current guidelines outlined in the conclusions. However, for collaboration with an AI, copyrightability is determined on a case-by-case basis. I will preface this all with the standard IANAL, I could be wrong, etc., but with the concluding language calling slop only "unlikely" to be copyrightable, it sounds less cut and dried than you imply.


That's typical of this site. I hand you a huge volume of evidence explaining why AI generated work cannot be copyrighted. You search for one scrap of text that seems to support your position even when it does not.

You have no idea how bad this leak is for Anthropic because with the copyright office, you have a DUTY TO DISCLOSE any AI generated work, and it is fully RETROACTIVE. And what is part of this leak? undercover.ts. https://archive.is/S1bKY Where Claude is specifically instructed to HIDE DISCLOSURE of AI generated work.

That's grounds for the copyright office and courts to reject ANY copyright they MIGHT have had a right to. It is one of the WORST things they could have done with regard to copyright.

https://www.finnegan.com/en/insights/articles/when-registeri...


I merely read the PDF articles you linked, then posted, verbatim, the primary relevant section I could find therein. Nowhere does it say that works involving humans in collaboration with AI can't be copyrighted. The conclusions linked merely state that copyright claims involving AI will be decided on a case by case basis. They MAY reject your claim, they may not. This is all new territory so it will get ironed out in time, however I don't think we've reached full legal consensus on the topic, even when limiting our scope to just US copyright law.

I'm interpreting your most recent reply to me as an implication that I'm taking the conclusions you yourself linked out of context. I'm trying to give the benefit of the doubt here, but the 3 linked PDF documents aren't "a mountain of evidence" supporting your argument. Maybe I missed something in one of those documents (very possible), but the conclusions are not how you imply.

Whether or not a specific git commit message correctly cites Claude usage may further muddy the waters more than IP lawyers are comfortable with at this time (and therefore add inherent risk to current and future copyright claims of said works), but those waters were far from crystal clear in the first place.

Again, IANAL, but from my limited layman perspective it does not appear the copyright office plans, at this moment in time, to categorically exclude AI-collaborated works from copyright.

Your most recent link (Finnegan) is from an IP lawyer consortium that says it's better to include attribution and disclosure of AI to avoid current and future claim rejections. Sounds like basic cover-your-ass lawyer speak, but I could be wrong.

Full disclosure: I primarily use AI (or rather agentic teams) as N sets of new eyeballs on the current problem at hand, to help debug or bounce ideas off of, so I don't really have much skin in this particular game involving direct code contributions spit out by LLMs. Those that have any risk aversion, should probably proceed with caution. I just find the upending of copyright (and many other) norms by GenAI morbidly fascinating.


> because with the copyright office, you have a DUTY TO DISCLOSE any AI generated work,

I was not aware of that. Who has that duty, and when do they have it?


Currently, the US copyright application process has an AI disclosure requirement for the determination of applicability of submitted works for protections under US copyright law.

The copyright office still holds that human authorship is a core tenet of copyrightability, however, whether or not a submission meets the "de minimis" amount of AI-generated material to uphold a copyright claim is still being decided and refined by the courts and at the moment the distinction appears to fall on whether the AI was used "as a tool" or as "an author itself", with the former covered in certain cases and the latter not.

The registration process makes it clear that failure to disclose submissions authored in large part by a contractor or AI can result in rejection of the copyright claim, now or retroactively on discovery.


You do not apply for copyright. In the US you can, optionally, register a copyright. You do not have to, but it can increase how much you get if you go to court.

I do not know whether any other country even has copyright registration.

Your main point that this is something the courts (or new legislation) will decide is, of course, correct. I am inclined to think this is only a problem for people who are vibe coding. The moment a human contributes to the code that bit is definitely covered by copyright, and unless you can clearly separate out human and AI contributed bits saying the AI written bits are not covered is not going to make a practical difference.


My (limited) understanding was that without formal registration you cannot file any infringement suits over works protected by said copyright. Then what's the point of the copyright, other than getting to use that fancy 'c' superscript?


I think you're wrong but I may be wrong too, so if you have a link that talks about this formal registration requirement it would be great.


While copyright exists automatically upon creation, the Supreme Court ruled in 2019 that the registration certificate must be granted before you can initiate a lawsuit.

You do, as the developer. Let's circle back to the original comment that started this discussion:

https://news.ycombinator.com/item?id=47594044

That comment is spot on. Claude adding a co-author to a commit is documentation to put a clear line between code you wrote and code claude generated which does not qualify for copyright protection.

The damning thing about this leak is the inclusion of undercover.ts. That means Anthropic has now been caught red handed distributing a tool designed to circumvent copyright law.


can you tell me where exactly in the documents you link to it says that?


It's beyond obvious that an LLM cannot have copyright, any more than a cat or a rock can. The question is whether anyone holds it, or whether content generated by an LLM simply does not constitute a work and thus falls outside copyright law entirely. As far as I can see, it depends on the extent of the user's creative effort in controlling the LLM's output.


It may be obvious to you, but it has led to at least one protracted court case in the US: Thaler v. Perlmutter.

> The question is whether anyone has or if whatever content generated by a LLM simply does not constitute a work and is thus outside the entire copyright law.

It is going to vary with copyright law. In the UK the question of computer-generated works is addressed by copyright law, and the answer is "the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken".

It's also not a simple case of LLM-generated vs human-authored. How much work did the human do? What creative input was there? How detailed were the prompts?

In jurisdictions where there are doubts about the question, I think code is a tricky one. If the argument is that prompts are just instructions to generate code, and therefore the code is not covered by copyright, then you could also argue that code is just instructions to a compiler, and the resulting binary is not covered by copyright.


The binary should be considered a "derived work". Only the original copyright owner has the exclusive right to create or authorize derivative works. That means you are not allowed to compile code unless you have a license to do so. Right?


Yes, so is LLM generated code a derivative work of the prompts? Does it matter how detailed the prompts are? How much the code conforms to what is already written (e.g. writing tests)?

It looks like it will be decided on a case by case basis.

It will also differ between countries, so if you are distributing software internationally what will be a constraint on treating the code as not copyrightable.


> is LLM generated code a derivative work of the prompts?

Very good question. I would think it is. You are just using a mechanical system to transform your prompt into something else, right?

But a distinguishing factor may be that:

1. Output of the LLM for the same prompt can vary

2. So you don't really have "control" over what the AI produces

3. Therefore you should not get a copyright to the output of the LLM because you had very little to say about how that transformation (from prompt to code) was made.


According to the law, if I use Claude to generate something, I hold the copyright, provided Claude didn't verbatim copy another project.


Why wouldn't Anthropic own it? They generated it.


It is not "beyond obvious" that a cat cannot have copyright, given the lawsuit about a monkey holding copyright [1], and the way PETA tried to used that case as precedent to establish that any animal can hold copyright.

[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...


Anthropic could at least make a compelling case for the copyright.

It becomes legally challenging with regard to ownership if I ever use work equipment for a personal project. If it later takes off, they could very well try to claim ownership in its entirety simply because I ran a test once (yes, there's a whole Silicon Valley season about it).

I don't know if they'd win, but Anthropic absolutely would be able to claim the creation of that code was done on their hardware. Obviously we aren't employees of theirs, though we are customers that very likely never read what we agreed to in a signup flow.


Using work equipment for a personal project only matters because you signed a contract giving all of your IP to your employer for anything you did with (or sometimes without) your employer's equipment.

Anthropic's user agreement does not have a similar agreement.


My point was that they could make a compelling case though, not that they would win.

I don't know of any precedent where the code was literally generated on someone else's system. It's an open question whether that implies any legal right to the work, and I could pretty easily see a court accepting the case.


Who owns the copyright for something not written by anybody, you ask? Is it the man who pays to have it written, or the owner of the machine that does the writing? But it is neither. Nobody owns the copyright because nobody has written it.


I think all you need to do is claim that your girlfriend is your laptop. /s


Your explanation was several years worth of math studies beyond what GP was asking.


Sure, if you make that clear in all of your marketing rather than lying your ass off and then trying the "lol we didn’t really mean it" defense.


I'd think the conclusion you should draw is not that "even the famous experiments were not valid, so nothing in psychology is" but rather "the validity of an experiment does not correlate with how famous it is".


A direct conclusion. The insight I'll draw from that is that academia gives voice to the results the current zeitgeist finds interesting and believable without properly verifying the evidence.

See also the replication crisis.


Famous experiments are not chosen by academia. They are chosen by non-academics. What you usually find is academics being much more reserved and more critical of these than journalists, bloggers, or random commenters on HN.


I don't know about "much more reserved"... Citation needed. In the absence of evidence otherwise I assume academics are just people.


Yes, they are just people - people who are much closer to the topic than random commenters on HN.

Frankly, you made up an accusation against academics from nothing, without bothering to check what they generally say. You just made it up.


What accusation did r-w make...?


I don't think academia runs fox news and cnn but I'll withhold judgement


s/voice/authority/


I guess my point is that I don't need to think for long before I find an example justifying why physics is a serious field.

What would be the equivalent of Newton's laws in psychology? Does such a thing exist? Or does the whole field just prove how complicated human beings are by being incapable of proving anything else (which in itself would be an interesting result, don't get me wrong)?


Physics is an exact, quantitative, natural science. Psychology is neither exact, quantitative (usually), nor a natural science. They are not comparable. But like many other fields of study that are not hard sciences, psychology can still be useful and valuable. (Note the "can". Given the replication crisis, how much of psychology actually is I cannot say.)


Sure, I agree with that.

But do we have an example of something that is provably valuable? I am genuinely interested: after reading this article, I realised that there is nothing I can attribute to psychology off the top of my head.

I mean, Milgram used to be one, but well...


Indeed, AFAIK neural networks have caused at least two AI winters before finally breaking through thanks to a few good new ideas and the fact that the needs of computer games incidentally led to the development of a big industry of specialized, programmable, high-performance dot product calculators.


Speaking of winters; there's a good article about Cyc, a successor to Automated Mathematician. Cyc was the last big project in symbolic AI: https://yuxi.ml/cyc


Makes sense, given that to birds, optimizing for weight is everything. But seeing that the ridiculously smart border collies have a comparatively low density of neurons, clearly there’s more to intelligence than that.


I don't even know how you'd compare their intelligence, it's so apples to oranges. Most birds build nests, so they have an advantage in tool use, and that's what gets them ahead in some tests. On the other hand, has anyone tried to train corvids to herd other birds/animals? I bet BCs would have an advantage there :)


I'm not trying to compare them, just noting an interesting thing in the diagram in the article :) Wrt BC cognition, one notable feat is that some of them are known to have learned the words for hundreds of different objects.


Most people don't know it, but birds actually are optimizing against rotational inertia far more than they're optimizing for mass.

Otherwise they would barely be able to eat or drink; their stomachs are far larger and can be far heavier than their brains.

Why would inertia need to be optimized? Think a little bit.


I've not spent significant time with border collies, but I'd say that if I had to rank, multiple species of corvids are smarter than german shepherds (a breed I'm more familiar with).

