Hacker News | stuxnet79's comments

My G4 succumbed to this issue, and I was never able to revive it. Some important documents and images that I hadn't yet backed up to the cloud disappeared along with it. Still very sour about that. Other than that I enjoyed the phone: the dimensions felt perfect and the camera was good for its time. But a defect of that nature is too serious to overlook, so that was the last LG phone I ever owned.

> So timed that all pretty great. What worries me is my desktop is up for a full new buy somewhere around early '28

That's a very specific date / timeline. How do you decide to do a full new buy? I ask because I own a desktop that I built 15 years ago which I was flirting with replacing completely last year, but unfortunately I didn't pull the trigger ... oops :(

My old rig is still going strong. The motherboard can only take up to 32GB of DDR3, though. The CPU is an Intel i7-4790K, which still holds up today if you are not running a resource-hog OS (looking at you, Windows). Overall it is completely serviceable for my needs. Being honest with myself, the only reason I wanted to upgrade was for nerd cred, but I don't game much anymore and don't do any ML tasks that require lots of local compute.


My PC is similar. I upgraded it to a 4790k a few years ago (best CPU on the socket). What's funny is I also maxed out the RAM as well because I realised two more 8GiB sticks were like £30 so why not. I thought it was a funny thing to do at the time as I didn't really need that much, but glad I did now. It's going to have to do me for many more years to come, but I'm fine with that. I don't game at all. Just have to hope nothing fails. I did build it with solid foundations: good and overprovisioned PSU, Asus mobo, so here's hoping.

Unfortunately I do also have server gear now as well. I'm going to have to really think about what I actually need now...


>That's a very specific date / timeline.

It's aimed at roughly hitting Zen 6 and switching to high refresh rate 4K gaming.

Was really hoping I could hop to DDR6 and PCIe 6 too, but that seems less plausible.

>How do you decide to do a full new buy?

Gut feel. It's a 2019 build that had a mid-life refresh, so 2028 is near a decade, which for a gaming rig is good going. Gaming also means a big-VRAM GPU makes sense for toying with LLMs, because I get dual use out of it. Plus I don't think the 3090, much as I like it, will drive 4K high refresh.

...If it wasn't for gaming I could definitely keep this rig till 2030+. It's comically overspec'd for browsing and casual dev stuff. Hell, it's even got an Optane boot drive, so that'll last till the end of time.


I suspect there are a lot of people on the 2028 refresh train. If you bought a 1700 in 2017/2018 - which a lot of people did, because it was such good $/perf - you could ride the AM4 platform to a 5900(X/XT) now and still be pretty happy, but AM4 is a dead end now and the X570 motherboards are hard to find. So, if you want more PCIe, DDR5, etc., it'll be time to jump once it starts to feel sluggish for high-end tasks (gaming, etc.) around that time.

> X570 motherboards are hard to find

They're alive and well in eBay land. The SSD NAS I mentioned was an X570 eBay build, because they can do full ECC and eBay was full of old AM4 gaming builds aging out.

>more PCIe,

Yeah, that's the sticking point, though a good X570 can do 7x NVMe and 8x SATA... so plenty for a NAS if you're up for colouring outside the lines a bit.


> I'd like to say a brief thank you to what the brief, golden period of globalisation was able to bring us.

Not everyone benefited. Market globalism wasn't particularly kind to the global south, and the specific mandates that the WTO enacted on countries in Latin America / Africa (the Washington Consensus) greatly increased local wealth disparities despite visibly growing GDP for a time.

America profited handsomely because, for most of the past 30 years, it was where the (future) transnational conglomerates were based. These companies stood to benefit from the opening up of international markets. Now that these companies are being out-competed by their Asian counterparts, instead of going back to the drawing board and innovating, they are playing the "unfair trade practices" card, and of course the current administration is on board with it.

Globalisation is not going anywhere, but America is increasingly alienating itself from allies who it could stand to benefit from.


> but America is increasingly alienating itself from allies who it could stand to benefit from.

We're a clown show and we don't deserve to have friends until we get our shit together.


And "your shit" is spreading further and further, press conference by press conference. In the last hour or two the great man dropped this quote regarding NATO:

"We would have always been there for them, but now, based on their actions, I guess we don't have to be, do we?" Trump told the audience.

"That sounds like a breaking story? Yes, sir. Is that breaking news? I think we just have breaking news, but that's the fact. I've been saying that. Why would we be there for them if they're not there for us? They weren't there for us."

I can't imagine how long it will take to get this shit back together enough that the US can be trusted again by the international community. One responsible government just means that everything built within the four years of its administration could be torn up, burnt, shat on, and buried within two weeks of a new administration.

And yet there seems to be a base 30% support for the current behaviour.

I think the first thing to try and fix is the education system.


What is the best way to archive a JS-heavy site like this? I reviewed the OP's GitHub and they haven't open-sourced these visualizations, probably because they are tied to his employer.


An old classic. Thanks for reminding me about this article.


> Most of Nvidia's "strength" is the heavy lifting done by the folks over at TSMC.

The software side is also important. No discussion of Nvidia's moat is complete without also mentioning CUDA.


I don't really see it worth mentioning CUDA.

It does nothing that other compute APIs can't, and the majority of enterprise compute software doesn't use it and/or works on ROCm HIP with minimal performance loss.

A lot of research projects (such as all the early LLM research, given the topic) are written in Python and use libraries to shim all of that as well; PyTorch and ONNX both run natively on AMD and are covered under AMD's commercial support.

And then we come to the case of llama.cpp, which supports more APIs than any other inference engine... not only does it run on Nvidia/CUDA, it runs on AMD/HIP, Vulkan on at least 4 different vendors, SYCL on at least Intel ARC, BLAS/BLIS, Apple/Metal, Snapdragon's quasi-NPU, and Moore Threads (that new Chinese startup for domestic GPUs).
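To make that concrete: the breadth comes from ggml's pluggable backends, where the same llama.cpp tree is built against a different compute API with a single CMake switch. A rough sketch (option names are the current GGML_* ones; older checkouts used LLAMA_CUBLAS-style names, so check the build docs for your version):

```shell
# One llama.cpp codebase, many compute backends -- pick one flag per build.
cmake -B build -DGGML_CUDA=ON     # Nvidia, via CUDA
cmake -B build -DGGML_HIP=ON      # AMD, via ROCm HIP
cmake -B build -DGGML_VULKAN=ON   # any Vulkan-capable GPU
cmake -B build -DGGML_SYCL=ON     # Intel Arc, via SYCL
cmake -B build -DGGML_METAL=ON    # Apple Silicon (enabled by default on macOS)
cmake --build build --config Release
```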

There is no reason to write greenfield code with CUDA today, and most people aren't.


> One of the most challenging aspects of being a parent (at least for me) is dealing with the kids not interested in cooperating or listening/learning.

> And just in general, giving guidance and seeing that look on their faces that means they're just waiting for you to stop talking so they can go on with their lives.

Instead of seeing this as a challenge perhaps reflect on it and take it as an opportunity to learn more about your kids. Not everybody is the same, and not everyone will have the same aspirations.

I know of many who got the full ride in terms of music lessons but do not have any intrinsic passion for it. Overall it was a total waste of time in terms of their life satisfaction and fulfillment. Me on the other hand, I was very into music but didn't have the option of getting any lessons.

FWIW I was one of those kids who was very compliant and cooperative but in retrospect I can see that it harmed my development / self-concept. It's only now as an adult that I'm able to grapple with this.


I do try to keep abreast of where they're at. It's pretty much a one-way street. C'est la vie.


> cost of significantly better languages is essentially free

Is it? We still need meatspace humans to vet what these AI agents produce. Languages like C++ and Rust still require huge cognitive overhead relative to Python, and that will not change anytime soon.

Unless the entire global economy can run on agents with minimal human supervision someone still has to grapple with the essential complexity of getting a computer to do useful things. At least with Python that complexity is locked away within the CPython interpreter.

Also an aside, when has a language ever gotten traction based solely on its technical merits? Popularity is driven by ease-of-use, fashion, mindshare, timing etc.


Yeah that's sort of fair today, although we have switched over most of our org to Rust and it hasn't been much of a problem. The LLM can usually explain small parts of code with high accuracy if you are unsure.

Overall the switch has been very much loved. Everything is faster and more stable, and we haven't seen much of a reduction in output.


> Compare that to a smart engineer who doesn't have that wisdom: those people might have an easier time jumping in to difficult problems without the mental burden of knowing all of the problems upfront.

My favorite story in CS related to this is how Huffman Coding came to be [1]

[1] https://en.wikipedia.org/wiki/Huffman_coding#History
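(For anyone who hasn't seen the construction Huffman landed on: it's tiny. A hedged sketch in Python, using a heap of partial code tables; the function name and structure here are mine, not from any canonical source.)

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a Huffman code table for the characters of `text`."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(n, i, {ch: ""}) for i, (ch, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        # Prefix one subtree's codes with 0, the other's with 1, and merge.
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged |= {ch: "1" + code for ch, code in c2.items()}
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]
```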


If the A18 Pro has the same ISA as the M-series chips then this may not be so straightforward. I am still hanging on to my 2020 Intel MBP for dear life because it is the only Apple device I own that allows me to run Ubuntu and Windows 11 on a VirtualBox VM.


Would you elaborate on what you mean by saying Linux on an M-series chip isn't straightforward? That's not been my experience: I (and lots of other devs) use it every day, Apple supports Linux via [0], and provides the ability to use Rosetta 2 within VMs to run legacy x86 binaries.

0: https://github.com/apple/container


Clearly I'm not as knowledgeable about this as I thought I was. I already have an Ubuntu x86 VM running on an Intel Mac (inside VirtualBox), and the same with Windows 11. Can this tool allow me to run both VMs in an Apple Silicon device in a performant way? Last I checked VirtualBox on Apple Silicon only permits the running of ARM64 guests.

While I have a preference for VirtualBox I'd say I'm hypervisor agnostic. Really any way I can get this to work would be super intriguing to me.


> Can this tool allow me to run both VMs in an Apple Silicon device in a performant way?

I use VMware Fusion on an M1 Air to run ARM Windows. Windows is then able to run x86-64 executables, I believe through its own Rosetta 2-like implementation. The main limitation is that you cannot use x86-64 drivers.

Similarly, ARM Linux VMs can use Rosetta 2 to run x86-64 binaries with excellent performance. For that I mostly use Rancher or podman, which set up the Linux VM automatically, and then use it to run ARM Linux containers. I don't recall if I've tried to run x86-64 Linux binaries inside an ARM Linux container; it might be a little trickier to get Rosetta 2 to work. It's been a long time since I tried to run an x86-64 Linux container.
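The wiring for the Linux-VM case is small but fiddly: the host exposes Rosetta to the guest as a virtiofs share, and the guest registers it as the binfmt_misc handler for x86-64 ELF binaries. A sketch along the lines of Apple's Virtualization framework docs (the share tag and mount point depend on how the VM was configured; commands run inside the ARM Linux guest):

```shell
# Mount the Rosetta share the host exposes (tag "rosetta" is an assumption
# -- it must match the virtiofs tag set in the VM configuration).
mkdir -p /media/rosetta
mount -t virtiofs rosetta /media/rosetta

# Register Rosetta as the interpreter for x86-64 ELF binaries.
/usr/sbin/update-binfmts --install rosetta /media/rosetta/rosetta \
    --magic "\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00" \
    --mask  "\xff\xff\xff\xff\xff\xfe\xfe\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
    --credentials yes --preserve no --fix-binary yes
```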


Possible catch: Rosetta 2 goes away next year in macOS 27.

I don’t know what the story for VMs is. I’d really like to know as it affects me.

Sure you can go QEMU, but there’s a real performance hit there.


Not until macOS 28, but you're right: it's frustratingly unclear whether the initial deprecation is limited to macOS apps or whether it will also stop working for VMs.

https://support.apple.com/en-us/102527

https://developer.apple.com/documentation/virtualization/run...


This can be avoided by not upgrading to macOS 28, right? I'm new to Macs and the Apple release schedule, so I'm not sure how mandatory the annual updates are.


Does Apple Silicon support VMs within VMs?

What if you run macOS 27 in a VM, and then run the x86-hosting VM inside that?


It would be pretty difficult for Apple to disable Rosetta for VMs.


How so?


It doesn’t require anything from the host


The Apple documentation for using the Virtualization framework with ARM Linux VMs to run x86_64 binaries requires Rosetta to be installed:

https://developer.apple.com/documentation/virtualization/run...

So you must be talking about something else, perhaps ARM Windows VMs which use their own technology for running x86 binaries[^1].

In any case, please elaborate instead of being so vague. Thanks.

[^1]: https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...


You can just splat whatever support files it needs into the VM; there isn't anything special about them. In fact, you can copy them onto a different (non-Mac) device and use them there too.


It never existed.



Oh I have another year? Phew.


> Last I checked VirtualBox on Apple Silicon only permits the running of ARM64 guests.

I used to use VirtualBox a lot back in the day. I tried it recently on my Mac; it's become pretty bloated over the years.

On the other hand, this GUI for QEMU is pretty nice [1].

[1]: https://mac.getutm.app


Run ARM64 Linux and install Rosetta inside it. Even on the MacBook Neo it'll be faster than your 2020 Intel Mac.


https://github.com/abiosoft/colima

This is a super easy way to run Linux VMs on Apple Silicon. It can also act as a backend for Docker.
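When backed by Apple's Virtualization framework, colima can also wire up Rosetta for you, so amd64 containers run via translation rather than emulation. A sketch (flag names per recent colima releases; check `colima start --help` for your version):

```shell
# Start a VZ-backed Linux VM with Rosetta enabled for x86-64 translation.
colima start --vm-type vz --vz-rosetta

# The Docker-compatible runtime can then run amd64 images:
docker run --rm --platform linux/amd64 alpine uname -m   # should print x86_64
```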


Pay Parallels for their GPU acceleration, which makes ARM Windows on Apple Silicon usable.


The instruction set is not the issue; the issue is that on ARM there's no standardized way, like there is on x86, to talk to specialized hardware, so drivers must be reimplemented with very little documentation.


That has nothing to do with running VMs.


As long as you're ok with arm64 guests, you can absolutely run both Ubuntu and Win11 VMs on M-series CPUs. Parallels also supports x86 guests via emulation.


> As long as you're ok with arm64 guests

I've run amd64 guests on M-series CPUs using QEMU [2]. Apple's Rosetta 2 is still a thing [1] for now.

[1]: https://support.apple.com/en-us/102527

[2]: https://mac.getutm.app


How is the performance when emulating the x86 architecture via Parallels?

Also, is it possible to convert an existing x86 VM to arm64, or do I just have to rebuild all of my software from scratch? I always had the perception that the arm64 versions of Windows & Ubuntu have inferior support, both in terms of userland software and device drivers.


Same Armv8 ISA. And it's the same ISA Android Linux has run on for over a decade.

