Sad news. I'm sitting at a modified Filco keyboard with custom firmware right now. Its sound profile is not very pleasing by today's standards but it has been a reliable workhorse.
I once saw a talk given by a lawyer on exactly this topic. It was a long time ago, and unfortunately I won't be able to find it. Anyway, the takeaway was that plenty of federal laws are written so vaguely and broadly that prosecutors have incredible room for interpretation: they can come up with some kind of crime pretty much any time they want to.
On the other hand, that kind of thing alone would not normally be enough to bring a case. They use that kind of power to strengthen their case against people they already know are real criminals. Of course, the more the Justice Department is captured by bad actors, the less this holds.
You might call me a "techie," and I both use AI and hold very strong anti-AI sentiment. I don't think this is a contradiction, because while the technology itself is not bad, the way people use it definitely is.
People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.
> I both use AI and have very strong anti-AI sentiment.
Me, too. The AI hype machine involves some really bad ideas: the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses, by corporations and governments alike, are irresponsible and dangerous. They are likely to produce or reproduce harmful biases and to reduce human accountability for crucial decisions and outcomes.
At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.
And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.
I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.
I agree completely. The way it's marketed and used is a big part of my distaste; the other part is the actions and ethics of big tech and the AI companies. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from any of a plethora of providers.
It's a really nice idea, but of course completely antithetical to the business model of modern social media platforms, so it will never go anywhere. HN might be the only locale with any real numbers that I could see actually adopting it. Even BlueSky, I think, could never risk something like this.
As an interesting thought experiment, consider the questions that TruthSocial would put in. Would an average, unsophisticated user be able to tell the difference between your product and a hopelessly biased version such as that, one that backs up its "correct" answers with its own misinformation? Would it be just another schism of reality?
In my experience using llama.cpp (which ollama uses internally) on a Strix Halo, whether ROCm or Vulkan performs better really depends on the model, and the difference is usually within 10%. I also have access to an RX 7900 XT that I should compare against, though.
From what I understand, ROCm is a lot buggier and has some performance regressions on a lot of GPUs in the 7.x series. Vulkan performance for LLMs is apparently not far behind ROCm and is far more stable and predictable at this time.
I was talking about ROCm vs Vulkan. On AMD GPUs, Vulkan has been commonly recognized as the faster API for some time. Both have been slower than CUDA because most of the hosting projects focus almost entirely on Nvidia. The parent post seemed to indicate that newer ROCm releases are better.
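For anyone who wants to run this comparison themselves: llama.cpp selects the GPU backend at build time, so you build it twice and benchmark the same model on each. A rough sketch, assuming ROCm and the Vulkan SDK are already installed; the model path is a placeholder, and the CMake flag names have changed across releases (older trees used `LLAMA_VULKAN` / `LLAMA_HIPBLAS`), so check the repo's build docs for your version:

```sh
# Build two copies of llama.cpp, one per backend.
cmake -B build-vulkan -DGGML_VULKAN=ON && cmake --build build-vulkan -j
cmake -B build-rocm   -DGGML_HIP=ON    && cmake --build build-rocm -j

# Benchmark the same GGUF model on each build.
# -ngl 99 offloads all layers to the GPU so you measure the backend, not the CPU.
./build-vulkan/bin/llama-bench -m model.gguf -ngl 99
./build-rocm/bin/llama-bench   -m model.gguf -ngl 99
```

llama-bench reports prompt-processing and token-generation throughput separately, which matters here since the two backends can come out ahead on different phases depending on the model.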