Hacker News | samtheprogram's comments

There are benchmarks in the README. Python is ~10x faster; it’s heavily optimized. Based on the numbers and my experience with Flux.1, I’m guessing the Python run is JIT’d (or Flux.2 is faster), although it’d likely only be ~half as fast if it weren’t (i.e. definitely not 10x slower).


There are a lot of shortcomings in the current implementation, making it slow (but in my tree it's 2x faster as we speak). For instance, activations aren't kept on the GPU, kernels are not fused, flash attention is not used, and there are many other issues. Now I'll focus on those changes to approach PyTorch numbers a little bit more.


For a version of macOS that old, you’d probably want a dmg, which you can create with createinstallmedia if you have the Install macOS.app. Not sure if it’s supported with Lume as it’s the first time I’ve heard of it.


I’m just using their API tokens instead of Max. If usage gets out of control, then fine, I might need to look into an alternative. But I’ve grown very accustomed to the model and would rather not switch until I find a real need to.


I’m sorry, but the release of Plasma, around the same time IIRC, was not without controversy.


KDE 4.0 - which introduced Plasma - was released in 2008, and it was awful and wasn't supposed to be generally available (blame the distros and/or poor version naming). By version 4.5 (2010), KDE had stabilized. By the time Gnome 3 and Windows 8 were released in 2011/2012 respectively, KDE Plasma was pleasant to use and rock-solid.

It felt great to watch Gnome stumble after all the shit-talking; some schadenfreude was in order. I didn't care much for Windows 8; Vista was the bigger mess of a release.


But, come on, a WHOLE OTHER LEVEL of "controversy."

Plasma criticism was pointed and deliberate and grownup. Windows 8, less so.


This is one-shot.


When an API returns JSON, your JS framework can decide what to do with it. If it's returning HTML that's intended to go in a particular place on a page, the front-end has far less flexibility and pretty much has to put it in a specific place. Hence why they said endpoints can return 3-5 different versions of HTML.


> When an API returns JSON, your JS framework can decide what to do with it.

The JS framework is the frontend, so you're still coordinating.

> If it's returning HTML that's intended to go in a particular place on a page, the front-end has far less flexibility and pretty much has to put it in a specific place.

Well yes, because presumably that's what the app is supposed to do. If it's not supposed to put it in that place, why would that be the specified target?

If this kind of static assignment of targets is not flexible enough for some reason, then use OOB (out-of-band) updates, which let you replace fragments by id attribute. That lets you decouple some of these kinds of decisions.
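For illustration, a minimal sketch of an OOB response (the endpoint, ids, and markup here are hypothetical, not from the thread): the first fragment lands in the request's normal target, while the `hx-swap-oob` fragment replaces whatever element on the page shares its id.

```html
<!-- Hypothetical htmx response to e.g. POST /cart/add.
     The first element goes into the request's hx-target as usual;
     the second is swapped out-of-band into the existing #cart-count
     element elsewhere on the page. -->
<div id="added-message">Added to cart.</div>
<span id="cart-count" hx-swap-oob="true">3 items</span>
```

So a single endpoint can keep returning one "main" fragment and still touch other parts of the page without the front-end coordinating targets.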

Although "endpoints can return 3-5 different versions of HTML" is also a bit of a red flag that you're not using htmx correctly: generally, endpoints should return one fragment, maybe two in unusual cases.

In any case, you might find DataStar more to your liking, it's partway between React and htmx.


To clarify, there's nothing React or SPA about datastar. Moreover, HTMX v4 is essentially Datastar-lite (but heavier, and less capable)


There is, Datastar has client-side rendering based on signals [1]. Datastar is also very explicitly designed to be modular and extensible, so you can extend the client-side with more features, as they've done with Web Components.

[1] https://data-star.dev/guide/reactive_signals


Signals aren't even really "rendering". And React and SPAs don't have a monopoly on doing things on the client - that's just JavaScript. I dare you to go to the Datastar Discord and tell them that they're React-adjacent and SPA-like.


"framework can decide what to do with it" sounds like a feature (more flexibility) but is actually often the source of errors and bugs.

A single, consistent, canonical response, generated by the server, taking into account all relevant state (which is stored on the server) is much cleaner. It's deterministic and therefore much more testable, predictable and easier to debug.

For pure UI-only logic (light/dark mode, etc) sure you can handle that entirely client-side, but my comment above applies to anything that reads or writes persistent data.


Wow, Slax is still around and supports Debian now too? Thanks for sharing.


I used to use it during the netbook era, was great for that.


I never minimize windows. They are almost always full screen or split screen (unless I'm quickly grabbing a file in a Nautilus/Finder/Explorer window), and I just hide windows if I really need to. The same is true for macOS.

Forget what I do in Windows, been a couple years since I daily drove it.

I wonder if that's because I've used macOS and Gnome more than Windows for the last decade -- because it's confusing as hell to cmd/alt-tab back to an app, or click it in the dock, and for its window to not appear because you minimized it rather than hiding it. When that happens, it usually takes 30 seconds until I realize the app is hiding in the task bar or dock.


YOU should be writing your commit messages, not an AI.

You can always generate a new commit message (or summary, alternative summary, etc) down the road with AI. You can never replace your mind being in the thick of a changeset.


The author of the commit doesn't matter per se. If someone is just having AI summarize their changes and using that as the commit message, I agree that they're doing it wrong.

These days, lots of my commit messages are drafted by AI after having chatted at length about the requirements. If the commit message is wrong or incomplete, I'll revise it by hand or maybe guide the AI in the right direction. That tends to be a much more useful and comprehensive description of the commit's intent than what I would naturally find worthwhile to write on my own.

OP's approach is interesting as well, at least in principle, and if it works well it might be the next best option in the absence of a chat log. It should just make sure to focus on extracting the "why" more than describing the "what".


> than what I would naturally find worthwhile to write on my own.

I take issue with that statement. There's nothing "natural" about documentation. You're not "naturally disposed" to writing a certain level of documentation. It's a skill and a discipline. If you don't think it's worthwhile to write documentation, that's not a "natural failing". You're making a judgment, and any missing documentation is an error in judgment.


I meant "natural" in a context of having more urgent immediate priorities than extensively detailed documentation at the commit level — not an error in judgement, just a tradeoff.

If a given project has time/budget to prioritize consistent rigorous documentation, of course it should consider doing so. AI's ability to reduce the cost of doing so is a good thing.


If we assume, as many do, that we are going to delegate the work of "understanding the code" to AI in the coming years, this surely becomes even more important.

AI writing code and commit messages becomes a loop divorced from human reasoning. Future AIs will need to read the commit history to understand the evolution of the code, and if they're reading poor summaries from other AIs it's just polluting the context window.

Commit messages are documentation for humans and machines.


I have the completely opposite opinion on this.

Writing commit messages is one of these mundane chores I’d gladly delegate to LLMs which are very very good at this kind of thing.

I mean, if you really know your code, you know it; there's not much value in reinforcing it in your head one more time by writing comprehensive commit messages - it’s a waste of time, imho.


Sounds like you haven't been working long enough to forget your decisions, which you WILL do eventually. In such cases, where you're looking at code you wrote 10 years ago and you find a weird line, when you view the git blame and read the commit message, you'll be very thankful that you explained not just "what" you did, but "why" you did it - something an AI will have a very hard time doing.

You don't have to if you don't want to, but if you think "this commit message is just a summary of the changes made", you'll never write a useful commit message.


I’ve been working in the industry for two decades, and I think commit messages are not the best place for storing decisions and associated context. I personally prefer ADRs.


Two decades and you don't see any value in writing down what's currently in your head?

Anyhow, ADRs are good, but they stand for Architectural Decision Records; not every decision is at that level.

In general, if there's a better place to store explanations, do use it, but often, in many projects, commit messages are the least bad place; and it's enormously better to write there than nowhere at all.


Line comments work pretty well for explanation storage. :) Almost every programming language supports that feature.


They can sure be good, and are often indeed a better place, but unfortunately most people don't want too many lines of comments in the code.

We sure should have better tools to handle code annotations, but commit messages are a not-too-bad fallback until we get them.


That’s why you put the comment in the code


comments rot as the code changes around them


Neither the code nor the AI knows WHY a commit is being made.

This context should at the very least be linked.


For 99% of all commits the WHY is fully answered by the diff itself.


Hard disagree, though it’s probably dependent on your domain and/or how expressive your test suite is.

But even if that were true, reading a two liner explanation is very obviously more time efficient than reviewing a whole commit diff.

Super common case: you've got a subtle bug causing unexpected values in data. You know from the db or logs that it started on 2025-03-02. You check the deployment(s) of that day and there are ~20 of them.

You can quickly read 20 lines in the log and have a good guess of which is likely to be related, or go for a round of re-reviewing 20 multi-file pull requests and reverse-engineer the context from code.
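To make that concrete, the log-scanning step might look like this with plain git (the date is from the example above; the grep term is hypothetical):

```shell
# List one-line commit subjects for everything committed on the suspect day.
git log --oneline --since="2025-03-02 00:00" --until="2025-03-03 00:00"

# If messages carry intent, grepping subjects narrows the suspects further.
git log --oneline --since="2025-03-02 00:00" --until="2025-03-03 00:00" --grep="rounding"
```

Twenty informative subject lines make this a minute of reading; twenty auto-generated "update files" messages make it twenty code reviews.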


"Super common case" is "initial commit", "fix spelling in README.md", or "small refactor". If your "super common case" is "subtle bug causing unexpected values in data" then you are doing something very, very wrong.


Man, 99% of non-bug-fix commits don't have a why other than "advance the current task".

Almost all commits live in tandem with some large feature or change being made. The reason for absolutely all of them is the same - build the thing.


>other than "advance the current task"

How do you expect someone to know what “the current task” was when they’re tracking down a bug 2 years down the line?


Then write that and link to the current task. That's the why. You don't need an LLM for that.


Perhaps this is about commit granularity. If keeping the history about advancing the task is not useful, then I’d merge these commits together before merging the PR; in some workflows this is set up to happen automatically too.


Maybe what we need is a pre-commit hook that prefixes every commit with the name of the branch it's being made onto
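A minimal sketch of that idea as a `prepare-commit-msg` hook (save it as `.git/hooks/prepare-commit-msg` and make it executable; the branch-naming conventions here are assumptions):

```shell
#!/bin/sh
# prepare-commit-msg hook: prefix the commit message with the current
# branch name, e.g. "[feature-123] fix login". $1 is the message file.
msg_file="$1"
# symbolic-ref works even before the first commit; empty on detached HEAD.
branch=$(git symbolic-ref --short -q HEAD || true)
case "$branch" in
  ""|main|master) ;;  # no prefix on mainline or detached HEAD
  *) printf '[%s] %s' "$branch" "$(cat "$msg_file")" > "$msg_file" ;;
esac
```

With this in place, `git commit -m "fix login"` on branch `feature-123` produces the subject `[feature-123] fix login` - which at least links each commit back to its task, no LLM required.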


I agree in principle, but in practice, it's horrible right now.

Most AI-generated commit messages and PR descriptions are much too verbose and contain zero additional informational value that couldn't be parsed from the code directly. Most of the time I'd rather read 2 sentences written by a human than a wall of text with redundant information.


> I mean, if you really know your code, you know it, there is no much value in reinforcing it in your head one more time via writing comprehensive commit messages - it’s a waste of time, imho.

I know the code... when I write it. But 2 weeks later all the context is gone, and that's just _for me_. My colleagues who also have to be in that code don't even start with context.

I mean do what works for you, but understand the bulk of the work that this applies to is for >1 person shops with code bases too big to fit in ones head at all, much less for more than a day or so.


There's no point unless a critical mass of people use these tools. You will be the only one on your IP address using this configuration of masked fingerprinting, which is itself a fingerprint.

That's also why it's indeed useful when using Tor, because you're not identified by your base IP.

Unless we make this part of the culture, you have basically 0 recourse to browser fingerprinting except using Tor. Which can itself still be a useful fingerprint depending on the context.

EDIT: I'll add that using these tools outside of normal browsing use can be useful for obfuscating who's doing specific browsing, but it should be emphasized that using fingerprinting masking in isolation all the time is nearly as useful as not using them at all.


Basically the XKCD license plate comic: https://xkcd.com/1105/


Has anyone written software that automatically surfaces the relevant XKCD comic for every article this happens under?

I’d like a feature in my HN reader that sticks a red button at the bottom anytime XKCD has already made the points I’m reading.


Randall had a long career of good takes, until around 2016 when they stopped being objectively good.

I’m not kidding at all; my guess is he was doing drugs and stopped.


Wow. Not liking their political views equals doing drugs.

Must be an interesting place, that originates these "arguments".


I think you misread. They said the opposite: that he was taking drugs and his takes were good, then he stopped taking drugs and now they’re bad.

I’m not saying I agree or that I even think his takes have gotten worse, just clarifying what the other poster said.


1357 (2014-04-18) is pretty bad.

(Bonus points for the alt-text argument being isomorphic to nothing-to-hide.)


He was effectively years early to the “if you don’t like how Twitter is run, build your own <smug face>” take. Interesting how that argument isn’t used anymore.


As far as "obligatory xkcd" is concerned: 3154, 3155, 3159, 3160, 3162, 3165 and 3167 are all relevant. (I've found myself citing 3155 a lot, in attempts to deradicalise cranks: it sometimes works, if I can convince them to quit ChatGPT cold-turkey.)

It's fine to like the comics before around 2016, and dislike the ones afterwards, but there's nothing objective about that. Various people have put forward various thresholds for when xkcd "stopped being good", but ultimately it boils down to a combination of what TV Tropes would call "Tone Shift" and "They Changed It, Now It Sucks!".


> Randall had a long career of good takes, until around 2016 when they stopped being objectively good.

Specifically it was at this point in 2016: https://xkcd.com/1756/

> I’m not kidding at all, that my guess is he was doing drugs and stopped.

I don’t know if he stopped or started, but something changed.


A person used their relatively large platform to tell people that they don't support a crazy lunatic millionaire running the world's most powerful country? How scandalous!

In the USA, 2016 and onwards wasn't "just an election". It was something between a mildly harmful establishment candidate or a useless new face on one side, and "holy fucking shit are we actually letting this deranged wannabe monarch run for office?!?" on the other.

Give the man a break, it was (the start of) a crazy time, I'm actually surprised more creators didn't do something like this. If anything, it was barely even a political statement, more of a "hey fellow dems, go vote!" type thing.


Couldn’t have said it better. But hey, they made their bed…


Coming our against a candidate that literally said the words "grab them by the pussy" is a bad thing?


No, but not understanding your audience, not being able to not divide your fanbase for absolutely no reason, and doing all that for Hillary Clinton who history will not at all be kind to…

His traffic and hot takes dropped and his influence declined to almost nothing… maybe those things aren’t unrelated?


We should go further and make an AI agent that creates counterfeit XKCD comics of dubious quality for literally every scenario.


I've packed Firefox with the arkenfox user.js into a container, and maybe a handful of other people use it: https://github.com/grzegorzk/ff_in_podman. Still, the IP address alone is probably the strongest part of the fingerprint vector. Maybe instead it would be better to present a very different vector each time?


> "There's no point unless a critical mass of people use these tools"

That's what Mullvad Browser attempts to solve, I guess:

https://mullvad.net/en/browser


You can and should be using a good VPN to make the masked fingerprint and IP address non-unique.

