I have to say the same, but even as someone who isn’t really “relied upon”. The best asset for my own independent well-being has always been knowing which way the wind blows and sailing in that direction.
I have to imagine that if there were some internal Polymarket-esque platform, I'd be a rich man... (facetiously speaking). Unfortunately, knowing which way the wind blows is not as much of an asset to those in charge.
"Everything expanding at the same rate" sounds vaguely similar to the truth that what we feel as gravity (standing on earth) is us and everything around us accelerating upwards from the center of the gravity well - and what we feel as "pressure" on our feet is from the earth "holding us up" (in crude terms). So, it sounds crazy but it's not too distant from the truth.
It's just one-shot AI slop - literally, the prompt was 'make a web based version of [github url of this project]' and it spat this out. It appears to work fine.
I'll keep it up for a couple of months and then it'll be auto-deleted, no sense in keeping it around longer than that.
If you read the release, it explicitly does not include the likenesses of human actors. Only animated and illustrated characters are included. (Although, that does cover animated/illustrated versions of characters that are typically portrayed by human actors...)
This is almost certainly because the photographic/human likenesses of actors fall under an entirely separate license and royalty contract from Disney's pure IP.
The ping-ponging is certainly a Gen1 problem. (My Gen1 does this.) Gen1 was essentially an off-the-shelf Mobileye unit, and the performance was, as expected, not good.
The Gen2 autonomy stack is completely unrelated to Gen1, and from what I hear it's on a completely different level of reliability.
(Also - this presentation covered yet another, unrelated, Gen3 autonomy stack, which shares none of its hardware or models with the existing Gen2 stack, either.)