metalliqaz's comments | Hacker News

Sad news. I'm sitting at a modified Filco keyboard with custom firmware right now. Its sound profile is not very pleasing by today's standards but it has been a reliable workhorse.

A lot of money, but is it disposable? A high cost of living (HCOL) takes up the slack in so many cases.

I wouldn't call that 'Christian'. The 'extraordinary work ethic' exists in Japan, too. Not very Christian over there.

It reads like it received no proof-reading or editing, and it looks like it was vibe coded.

Intern project?


I once saw a talk given by a lawyer on exactly this topic. It was a long time ago, so unfortunately I won't be able to find it. Anyway, the takeaway was that there are plenty of Federal laws written in such a way that there is incredible room for interpretation by prosecutors. Vagueness and overbroad language to the point that they can indeed come up with some kind of crime pretty much any time they want to.

On the other hand, that kind of thing would not normally be enough, on its own, to bring a case. They use that kind of power to enhance their case against people they know are real criminals. Of course, the more the Justice Department becomes captured by bad actors, the less this applies.


You might call me a "techie," and I both use AI and have very strong anti-AI sentiment. I don't think this is a contradiction, because I believe that while the technology itself is not bad, the way people use it definitely is.

People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.


> I both use AI and have very strong anti-AI sentiment.

Me, too. The AI hype machine involves some really bad ideas, the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses both by corporations and governments are irresponsible, dangerous, and likely to produce or reproduce harmful biases and reduce the accountability of humans for crucial decisions and outcomes.

At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.

And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.

I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.


I agree completely. The way it's marketed and used is a big part of my distaste; the other part is big tech / AI companies and their actions and ethics. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from a plethora of providers.


someone has some l33t sk1llz


How about early career workers, 18-25? Are we just pulling up the ladder behind us and leaving them to toil in the mud?


The biggest issue is tertiary (post-secondary) education and its effect on the labor force participation rate (LFPR) in that age range, though not on the prime-age LFPR.


It's a really nice idea, but of course completely antithetical to the business model of modern social media platforms. So it will never go anywhere. HN might be the only locale with any real numbers that I could see actually using it. Even Bluesky, I think, could never risk something like this.

As an interesting thought experiment, consider the questions that TruthSocial would put in. Would an average, unsophisticated user be able to tell the difference between your product and a hopelessly biased version such as that? They would support the "correct" answers with their own misinformation. Would it be just another schism of reality?


better than Vulkan?


In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model, and it's usually within 10%. I have access to an RX 7900 XT that I should compare against, though.
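For anyone who wants to reproduce this kind of comparison, here's a rough sketch of building llama.cpp with both backends and running the same GGUF model through llama-bench. The cmake flags match recent llama.cpp; the `gfx1100` target is an assumption for an RX 7900 XT/XTX and should be adjusted for your GPU (e.g. `gfx1151` for Strix Halo).

```shell
# Build with the Vulkan backend (assumes the Vulkan SDK is installed)
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release

# Build with the ROCm/HIP backend (assumes a working ROCm toolchain;
# gfx1100 is a hypothetical target for RX 7900 XT/XTX -- adjust for your card)
cmake -B build-rocm -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build-rocm --config Release

# Benchmark the same model on each build; -ngl 99 offloads all layers to the GPU
./build-vulkan/bin/llama-bench -m model.gguf -ngl 99
./build-rocm/bin/llama-bench -m model.gguf -ngl 99
```

llama-bench reports prompt-processing (pp) and token-generation (tg) throughput separately, which matters here: the two backends can win on different phases even for the same model.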


Perhaps I should just google it, but I'm under the impression that ollama uses llama.cpp internally, not the other way around.

Thanks for that data point; I should experiment with ROCm.


From what I understand, ROCm is a lot buggier and has some performance regressions on a lot of GPUs in the 7.x series. Vulkan performance for LLMs is apparently not far behind ROCm and is far more stable and predictable at this time.


I meant ollama uses llama.cpp internally. Sorry for the confusion.


For me, Vulkan performs better on integrated cards, but ROCm (MIGraphX) wins on the 7900 XTX.


As I understand it, it depends on your GPU and ROCm version, but they're similar-ish.


I was talking about ROCm vs Vulkan. On AMD GPUs, Vulkan has been commonly recognized as the faster API for some time. Both have been slower than CUDA due to most of the hosting projects focusing entirely on Nvidia. Parent post seemed to indicate that newer ROCm releases are better.


Yes, Vulkan is currently faster due to some ROCm regressions: https://github.com/ROCm/ROCm/issues/5805#issuecomment-414161...

ROCm should be faster in the end, if they ever fix those issues.

