> but if one frame took 0.1s and the others took the rest, then for users it'll feel like the game plays at 10fps at that point
Wouldn't it feel like 10fps for 0.1s only? I agree it's a good thing to measure, I think it's called "stutter" usually, but I'm not sure you can say "it feels like 10 fps" since it's for such a small moment.
Yes, you are right. The word I was looking for is smoothness: the game won't feel like a stable 10fps, but it would feel as _smooth_ as a 10fps game, or even worse actually, because it's less predictable
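To make the point above concrete, here's a minimal sketch (the frame times are hypothetical) of why an average-FPS counter hides stutter: a single 100 ms hitch barely moves the average, but perceived smoothness is bounded by the slowest frame.

```python
# One second of 120fps frames, plus a single 100ms hitch.
# Frame times here are made up purely for illustration.
frame_times = [1 / 120] * 119 + [0.1]

total = sum(frame_times)
avg_fps = len(frame_times) / total        # what an FPS counter reports
worst_fps = 1 / max(frame_times)          # what the hitch feels like

print(f"average: {avg_fps:.0f} fps, worst frame: {worst_fps:.0f} fps")
```

The average stays above 100 fps, while the worst frame is a 10 fps moment; tracking a high percentile of frame time (e.g. 99th) captures this where the mean doesn't.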
> I doubt many people will honestly admit they did no design, testing and that they believe the code is sub par
I'd doubt any engineer who doesn't call most of their own code subpar when looking back on it a week or two later. "Hacking" also famously involves little design or (automated) testing, so sharing something like that doesn't mean much, unless you're trying to launch a business, and I see no evidence of that for this project.
> I understand that deep learning is accelerated by GPUs but the concept of a transformer could have been used on much slower hardware much earlier
But they don't give the same results at those smaller scales. People imagined it, but no one could put it into practice because the hardware wasn't there yet. Simplified, LLMs are basically Transformers plus the additional idea of "and a shitton of data to learn from", and to make training feasible with that amount of data, you do need some capable hardware.
Sounds like maybe using worse quantization on the bigger model? Quantization matters a lot for quality; basically anything below Q8 is borderline unusable. If it isn't specified in a benchmark already, it probably should be.
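For intuition on why benchmarks sometimes quietly drop quantization levels on bigger models, here's a back-of-envelope size calculation. It assumes a flat bits-per-weight cost; real formats (e.g. GGUF) add per-block scale/offset overhead, so actual files are somewhat larger.

```python
# Rough on-disk / in-memory size of model weights at different quantization levels.
def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Idealized size in GB, ignoring quantization block overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for name, bits in [("FP16", 16), ("Q8", 8), ("Q4", 4)]:
    print(f"70B at {name}: ~{model_size_gb(70, bits):.0f} GB")
```

A 70B model needs roughly 70 GB at Q8 but only ~35 GB at Q4, which is exactly the kind of hardware pressure that tempts people into quantizing the bigger model harder.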
Yeah, I don't get that either. I'd also want a binary "you're an adult" vs "you're a child" then we decide what belongs where, and the age is the same for everything. So once you're X, you get to fuck, drink, drive, die in wars, take loans, use social media, watch porn and whatever else we've added age limits to.
The same person who's mercilessly lied about safety is still running the company, so I'm not sure why anyone would expect any different from them moving forward. Previous example:
> In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn’t need safety approval, citing the company’s general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, “ugh . . . confused where sam got that impression.”
Somehow, in the AI world, "local-first" means a local harness talking to a remote model, almost never "local harness talking to local model". But then "open source model" apparently also means "you can download the weights if you agree to our license" and almost never "you can see, understand and iterate on what we did", so the definitions already drifted a lot between the two ecosystems.
I'm not sure I understand the question. Regardless of what provider you choose - be it cloud based or local - you have to provide setup information such as host, authentication, etc. So it "defaults" to nothing; you have to select something.
Local first means running Atomic with local models is not an afterthought. It’s a first class citizen that works just as seamlessly as running with a cloud provider - assuming you’ve done the work to provision the local models and their connections yourself.
So, given that this is brought up almost every single time a GUI framework/library is posted on HN, and has been for decades at this point, and given that it apparently doesn't make any difference, what's a better approach for educating the ecosystem at large about this problem? Do we need a website? A national outreach program? Do we need an arewescreenshottingyet.com landing page with stats to keep track? How can we finally fix this perpetual problem of people not adding screenshots for the visual things they've built?!
Two options: continue to spend the effort to teach new cohorts community expectations; or eliminate the production of new cohorts so the existing lessons eventually saturate the population.
I have nothing against Instagram asking me if I am over 16, but where we're going with these laws is my OS not allowing requests to Instagram unless I prove to it, with a photo ID, that I am over 16.
Sounds like the situation might end up with Instagram not accepting requests unless you're using an OS that follows those sorts of laws, which is kind of an inversion of what you said, and I think I'm fine with that outcome if it comes to that. Websites should be allowed to decide who's visiting them, unless they're government, utility or other basic-needs portals.
Fair, maybe. That'd be the better case, I suppose. However, that would be more like banking apps not liking rooted phones. The California law is more like your OS not allowing you to access resources unless you prove your age, not the external resource doing so.
> Websites should be allowed to decide who's visiting them
No, hold up, you just casually introduced a dystopian goal of facilitating the casual collection of government ID by website operators. I absolutely do not want the equivalent of South Korean ID numbers in order to do pretty much anything online.
Anyway as I always point out when these threads come up we've yet to try the simple and noninvasive solution. Websites should be required to send a content categorization header. Large enterprises that fail to do so should be fined. If that were uniformly happening it would then be possible to do proper client side filtering (right now that fails miserably).
Before anyone asks, app stores could be required to implement the equivalent of the header in an appropriate manner of their own design.
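The proposed scheme above can be sketched in a few lines: the server declares its content categories in a response header, and filtering happens entirely client-side. Note that the header name `Content-Categories` and the category vocabulary are hypothetical; no such standard header exists today.

```python
# Client-side filtering against a hypothetical content categorization header.
# Nothing leaves the device: no ID upload, no age verification service.
BLOCKED = {"adult", "gambling"}  # set locally, e.g. by a parent

def allowed(headers: dict[str, str]) -> bool:
    """Return True if the response's declared categories pass the local filter."""
    raw = headers.get("Content-Categories", "")
    categories = {c.strip() for c in raw.split(",") if c.strip()}
    return not (categories & BLOCKED)

print(allowed({"Content-Categories": "news, sports"}))  # True
print(allowed({"Content-Categories": "adult"}))         # False
```

The enforcement burden then sits on large publishers labeling honestly (backed by fines), rather than on every user proving their identity to every site.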