There are benchmarks in the README. Python is ~10x faster; it’s heavily optimized. Based on the numbers and my experience with Flux.1, I’m guessing the Python run is JIT’d (or Flux.2 is faster), although it’d likely only be ~half as fast if it weren’t (i.e. definitely not 10x slower).
There are a lot of shortcomings in the current implementation that make it slow (though the version in my tree is already 2x faster as we speak). For instance, activations aren't kept on the GPU, kernels are not fused, flash attention is not used, among other issues. Now I'll focus on those changes to get a bit closer to PyTorch's numbers.
For a version of macOS that old, you’d probably want a dmg, which you can create with createinstallmedia if you have the Install macOS.app. Not sure if it’s supported with Lume, as it’s the first time I’ve heard of it.
I’m just using their API tokens instead of Max. If usage gets out of control, then fine, I might need to look into an alternative. But I’ve grown very accustomed to the model and would rather not switch until I find a real need to.
KDE 4.0 - which introduced Plasma - was released in 2008, and it was awful and wasn't supposed to be generally available (blame the distros and/or poor version naming). By version 4.5 (2010), KDE had stabilized. By the time Gnome 3 and Windows 8 were released in 2011/2012 respectively, KDE Plasma was pleasant to use and rock-solid.
It felt great to watch Gnome stumble after all the shit-talking; some schadenfreude was in order. I didn't care much for Windows 8; Vista was the bigger mess of a release.
When an API returns JSON, your JS framework can decide what to do with it. If it's returning HTML that's intended to go in a particular place on a page, the front-end has far less flexibility and pretty much has to put it in a specific place. That's why they said endpoints can return 3-5 different versions of HTML.
> When an API returns JSON, your JS framework can decide what to do with it.
The JS framework is the frontend, so you're still coordinating.
> If it's returning HTML that's intended to go in a particular place on a page, the front-end has far less flexibility and pretty much has to put it in a specific place.
Well yes, because presumably that's what the app is supposed to do. If it's not supposed to put it in that place, why would that be the specified target?
If this kind of static assignment of targets is not flexible enough for some reason, then use OOB updates, which let you replace fragments by id attribute. That lets you decouple some of these kinds of decisions.
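A minimal sketch of what such an OOB response body can look like. The helper name and fragment shapes here are invented for illustration; `hx-swap-oob` is the actual htmx attribute, and the first fragment goes wherever the request's target points while the OOB fragment replaces the element with the matching id anywhere on the page:

```python
def render_row_update(row_id: str, cell_html: str, unread: int) -> str:
    """Hypothetical server-side helper: the first fragment lands in the
    request's hx-target; the hx-swap-oob fragment replaces the element
    with id="unread-badge" wherever it sits on the page."""
    main = f'<tr id="row-{row_id}">{cell_html}</tr>'
    oob = f'<span id="unread-badge" hx-swap-oob="true">{unread}</span>'
    return main + "\n" + oob

print(render_row_update("42", "<td>done</td>", 3))
```

The endpoint keeps returning one "main" fragment for its static target; anything else it needs to touch rides along as an OOB fragment keyed by id.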
Although "endpoints can return 3-5 different versions of HTML" is also a bit of a red flag that you're not using htmx as intended: generally, endpoints should return 1 fragment, maybe 2 in unusual cases.
In any case, you might find Datastar more to your liking; it's partway between React and htmx.
There is, Datastar has client-side rendering based on signals [1]. Datastar is also very explicitly designed to be modular and extensible, so you can extend the client-side with more features, as they've done with Web Components.
Signals aren't even really "rendering". And React and SPAs don't have a monopoly on doing things on the client - that's just JavaScript. I dare you to go to the Datastar Discord and tell them that they're React-adjacent and SPA-like.
"framework can decide what to do with it" sounds like a feature (more flexibility) but is actually often the source of errors and bugs.
A single, consistent, canonical response, generated by the server, taking into account all relevant state (which is stored on the server) is much cleaner. It's deterministic and therefore much more testable, predictable and easier to debug.
For pure UI-only logic (light/dark mode, etc) sure you can handle that entirely client-side, but my comment above applies to anything that reads or writes persistent data.
I never minimize windows. They are almost always full screen or split screen (unless I'm quickly grabbing a file in a Nautilus/Finder/Explorer window), and I just hide windows if I really need to. The same is true for macOS.
Forget what I do in Windows; it's been a couple of years since I daily drove it.
I wonder if that's because I've used macOS and Gnome more than Windows for the last decade -- because it's confusing as hell to cmd/alt-tab back to an app, or click it in the dock, and for its window not to appear because you minimized it rather than hiding it. When that happens, it usually takes me 30 seconds to realize the app is hiding in the taskbar or dock.
YOU should be writing your commit messages, not an AI.
You can always generate a new commit message (or summary, alternative summary, etc) down the road with AI. You can never replace your mind being in the thick of a changeset.
The author of the commit doesn't matter per se. If someone is just having AI summarize their changes and using that as the commit message, I agree that they're doing it wrong.
These days, lots of my commit messages are drafted by AI after having chatted at length about the requirements. If the commit message is wrong or incomplete, I'll revise it by hand or maybe guide the AI in the right direction. That tends to be a much more useful and comprehensive description of the commit's intent than what I would naturally find worthwhile to write on my own.
OP's approach is interesting as well, at least in principle, and if it works well it might be the next best option in the absence of a chat log. It should just make sure to focus on extracting the "why" more than describing the "what".
> than what I would naturally find worthwhile to write on my own.
I take issue with that statement. There's nothing "natural" about documentation. You're not "naturally disposed" to writing a certain level of documentation. It's a skill and a discipline. If you don't think it's worthwhile to write documentation, that's not a "natural failing". You're making a judgment, and any missing documentation is an error in judgment.
I meant "natural" in the context of having more urgent immediate priorities than extensively detailed documentation at the commit level — not an error in judgment, just a tradeoff.
If a given project has time/budget to prioritize consistent rigorous documentation, of course it should consider doing so. AI's ability to reduce the cost of doing so is a good thing.
If we assume, as many do, that we are going to delegate the work of "understanding the code" to AI in the coming years, this surely becomes even more important.
AI writing code and commit messages becomes a loop divorced from human reasoning. Future AIs will need to read the commit history to understand the evolution of the code, and if they're reading poor summaries from other AIs it's just polluting the context window.
Commit messages are documentation for humans and machines.
Writing commit messages is one of those mundane chores I’d gladly delegate to LLMs, which are very, very good at this kind of thing.
I mean, if you really know your code, you know it; there isn’t much value in reinforcing it in your head one more time by writing comprehensive commit messages - it’s a waste of time, imho.
Sounds like you haven't been working long enough to forget your decisions, which you WILL do eventually. In such cases, where you're looking at code you wrote 10 years ago and you find a weird line, when you view the git blame and read the commit message, you'll be very thankful that you explained not just "what" you did, but "why" you did it - something an AI will have a very hard time doing.
You don't have to if you don't want to, but if you think "this commit message is just a summary of the changes made", you'll never write a useful commit message.
I’ve been working in the industry for two decades, and I think commit messages are not the best place for storing decisions and associated context. I personally prefer ADRs.
Two decades and you don't see any value in writing down what's currently in your head?
Anyhow, ADRs are good, but they stand for Architectural decisions, not every decision is at that level.
In general, if there's a better place to store explanations, do use it, but often, in many projects, commit messages are the least bad place; and it's enormously better to write there than nowhere at all.
Hard disagree, though it’s probably dependent on your domain and/or how expressive your test suite is.
But even if that were true, reading a two liner explanation is very obviously more time efficient than reviewing a whole commit diff.
Super common case: you have a subtle bug causing unexpected values in data. You know from the db or logs that it started on 2025-03-02. You check the deployment(s) of that day and there are ~20 of them.
You can quickly read 20 lines in the log and have a good guess at which one is likely to be related, or go for a round of re-reviewing 20 multi-file pull requests and reverse-engineer the context from the code.
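That triage step is basically a one-line-per-commit scan. A throwaway repo shows the shape of it (the commit messages below are invented; in a real hunt you'd bracket the deploy day with `--since`/`--until`):

```shell
# Throwaway repo with two empty commits standing in for a deploy day
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "fix: clamp negative totals before currency conversion"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "chore: bump dependency versions"
# One line per commit: enough to shortlist suspects without opening a diff
git log --oneline
```

A message like "fix: clamp negative totals" jumps out immediately when you're hunting a data bug; "chore: bump dependency versions" gets skipped in half a second. That's the whole argument for the two-liner.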
"Super common case" is "initial commit", "fix spelling in README.md", or "small refactor". If your "super common case" is "subtle bug causing unexpected values in data" then you are doing something very, very wrong.
Perhaps this is about commit granularity. If keeping the history about advancing the task is not useful, then I’d merge these commits together before merging the PR; in some workflows this is set up to happen automatically too.
I agree in principle, but in practice, it's horrible right now.
Most AI-generated commit messages and PR descriptions are much too verbose and contain zero additional information that couldn't be parsed from the code directly. Most of the time I'd rather read two sentences written by a human than a wall of text with redundant information.
> I mean, if you really know your code, you know it; there isn’t much value in reinforcing it in your head one more time by writing comprehensive commit messages - it’s a waste of time, imho.
I know the code...when I write it. But 2 weeks later all the context is gone, and that's just _for me_. For my colleagues who also have to be in that code, they don't even start with context.
I mean, do what works for you, but understand the bulk of the work this applies to is at >1-person shops with code bases too big to fit in one's head at all, much less for more than a day or so.
There's no point unless a critical mass of people use these tools. You will be the only one on your IP address using this configuration of masked fingerprinting, which is itself a fingerprint.
That's also why it's indeed useful when using Tor, because you're not identified by your base IP.
Unless we make this part of the culture, you have basically 0 recourse to browser fingerprinting except using Tor. Which can itself still be a useful fingerprint depending on the context.
EDIT: I'll add that using these tools outside of normal browsing use can be useful for obfuscating who's doing specific browsing, but it should be emphasized that using fingerprinting masking in isolation all the time is nearly as useful as not using them at all.
He was effectively years ahead of the “if you don’t like how Twitter is run, build your own” <smug face> crowd. Interesting how that argument isn’t used anymore.
As far as "obligatory xkcd" is concerned: 3154, 3155, 3159, 3160, 3162, 3165 and 3167 are all relevant. (I've found myself citing 3155 a lot, in attempts to deradicalise cranks: it sometimes works, if I can convince them to quit ChatGPT cold-turkey.)
It's fine to like the comics before around 2016, and dislike the ones afterwards, but there's nothing objective about that. Various people have put forward various thresholds for when xkcd "stopped being good", but ultimately it boils down to a combination of what TV Tropes would call "Tone Shift" and "They Changed It, Now It Sucks!".
A person used their relatively large platform to tell people that they don't support a crazy lunatic millionaire running the world's most powerful country? How scandalous!
In the USA, 2016 and onwards wasn't "just an election". It was something between a mildly harmful establishment candidate or a useless new face on one side, and "holy fucking shit are we actually letting this deranged wannabe monarch run for office?!?" on the other.
Give the man a break, it was (the start of) a crazy time, I'm actually surprised more creators didn't do something like this. If anything, it was barely even a political statement, more of a "hey fellow dems, go vote!" type thing.
No, but not understanding your audience, dividing your fanbase for absolutely no reason, and doing all that for Hillary Clinton, to whom history will not at all be kind…
His traffic and hot takes dropped and his influence declined to almost nothing… maybe those things aren’t unrelated?
I have packed Firefox with the arkenfox user.js into a container, and maybe a handful of other people use it: https://github.com/grzegorzk/ff_in_podman. Still, the IP address alone is probably the strongest part of the fingerprint vector.
Maybe it would instead be better to present a very different vector each time?