Hacker News | MobiusHorizons's comments

I think you are conflating microcode with micro-ops. The distinction is fundamental to how the CPU works. Microcode is an alternative to a fully hard-coded instruction decoder: it allows tweaking the behavior of the rest of the CPU for a given instruction without re-making the chip. Micro-ops are a way to break complex instructions into multiple independently executing operations, and in the case of x86 I think comparing them to RISC is completely apt.

The way I understand it, back when the RISC vs CISC battle started, CPUs were being pipelined for performance, but the complexity of the CISC instructions most CPUs had at the time directly limited how fast that pipeline could be made. The RISC innovation was changing the ISA: breaking complex instructions with sources and destinations in memory into sequences of simpler loads and stores, and adding many more registers to hold temporary values for computation. RISC allowed shorter pipelines (lowering the cost of branches and other pipeline flushes) that could also run at higher frequencies because of their relative simplicity.

What Intel did went much further than just microcode. They broke the loads and stores out into micro-ops, using hidden registers to store the intermediates. This allowed them to profit from the innovations RISC represented without changing the user-facing ISA. This internal load/store architecture is what people typically mean by the RISC hiding inside x86 (although I will admit most of them don't understand the nuance). Of course Intel also added out-of-order execution to the mix, so the CPU is no longer a fixed-length pipeline but more like a series of queues waiting for their inputs to be ready.
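To make that concrete, here is a rough sketch of how a read-modify-write x86 instruction might be decomposed. This is illustrative pseudocode, not Intel's actual micro-op encoding, and `tmp0` is a hypothetical name for one of the hidden internal registers:

```
; CISC form: memory operand is both a source and the destination
add [rbx], rax

; roughly decomposes into RISC-like micro-ops:
load   tmp0, [rbx]        ; load into a hidden register
add    tmp0, tmp0, rax    ; ALU op only touches registers
store  [rbx], tmp0        ; separate store micro-op
```

The load, add, and store can then be scheduled independently by the out-of-order machinery, just as separate instructions would be on a RISC core.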

These days high-performance RISC architectures contain all the same architectural elements as x86 CPUs (including micro-ops and extra registers), and the primary difference is the instruction decoding. I believe AMD even designed (but never released) an ARM CPU [1] that put an ARM instruction decoder in front of what I believe was the Zen 1 backend.

[1]: https://en.wikipedia.org/wiki/AMD_K12


That's not really how it works. There are only a few companies on the planet licensed to create their own cores that can run ARM instructions. This is an artificial constraint, though, and at present China is (as far as I know) cut off from those licenses. Everyone else making ARM chips takes the core design directly from ARM and integrates it with other pieces (called IP) like IO controllers, power management, the GPU, and accelerators like NPUs to make a system on a chip. But with RISC-V, lots of Chinese companies have been making their own core designs, which gives them a flexibility in design that is not generally available (and certainly not cost-effective) with ARM.

Compiling the code is not the issue. The hard part is the system integration, most notably the boot process and peripherals. It's not actually hard to compile code for any given ARM or x86 target. Even much less open ecosystems like IBM mainframes have free and open source compilers (e.g. GCC). The ISA is just how computation happens. But you have to boot the system and get data in and out for the system to be actually useful, and pretty much all of that contains vendor-specific quirks. It's really only the x86 world where that got so standardized across manufacturers, and that was mostly because people were initially trying to make compatible clones of the IBM PC.

There will always be many gaps in people's knowledge. You start with what you need to understand, and typically dive deeper only when it is necessary. Where it starts to be a problem, in my mind, is when people have no curiosity about what's going on underneath, or even worse, start to superstitiously avoid holes in the abstraction without being willing to dig a little and find out why.


I mean, you can always make things slower. There are lots of non-optimizing or lightly optimizing compilers that are _MUCH_ faster than this. TCC is probably the most famous example, but hardly the only alternative C compiler with performance somewhere between -O1 and -O2 in GCC. By comparison, as I understand it, CCC has performance worse than -O0, which is honestly a bit surprising to me, since -O0 should not be a hard target to hit. At -O0, C is basically just macro-expanded into assembly with a bit of order-of-operations handling thrown in. I don't believe it even does register allocation.


That would be true if one were using a libc, but in a boot sector you only have the BIOS, so the atoi being referenced is the one defined in C near the beginning of the article.


Ah, I somehow skipped over that exact code block on first read.


The “fancy jump” is the branch instruction. As far as I know, all ISAs have them. Even rv32i, which is famously minimal, has several branch instructions in addition to two forms of unconditional jump. Branches are typically used to construct if / for / while as well as && and || (because of short-circuiting) and the ternary operator (although some architectures have special conditional-select instructions for that, which may or may not be faster than branches depending on the exact model). Without branches you would have to use a computed goto, with the destination address computed without conditional execution using constant-time techniques.


Solar roofs are much more practical to be honest.


Putting solar roofs on a building? For a car company?


There's a synergy effect here - Tesla sells you a solar roof and car bundle, the roof comes without a battery (making it cheaper) and the car now gets a free recharge whenever you're home (making it cheaper in the long term).

Of course that didn't work out with this specific acquisition, but overall it's at least a somewhat reasonable idea.


In comparison to datacenters in space, yes. Solar roofs are already a profitable business, just not likely to be high growth. Datacenters in space are unlikely to ever make financial sense, and even if they did, they are very unlikely to show high growth, due to the ongoing high capital expenses inherent in the model.


I think a better critique of space-based data centres is not that they can never become high growth; it's that if they do, it implies an economy so radically different from the one we live in that all our current ideas about wealth, nations, ownership, morality, and crime & punishment will seem quaint and outdated.

The "put 500 to 1000 TW/year of AI satellites into deep space" for example, that's as far ahead of the entire planet Earth today as the entire planet Earth today is from specifically just Europe right after the fall of Rome. Multiplicatively, not additively.

There's no reason to expect any current business (or nation, or any given asset) to survive that kind of transition intact.


For an electrification company.


It's obviously a pretty weird thing for a car company to do, and is probably just a silly idea in general (it has little obvious benefit over normal solar panels, and is vastly more expensive and messier to install), but in principle it could at least work, FSVO work. The space datacenter thing is a nonsensical fantasy.


Not physics defying, just economically questionable.

The main benefits of being in space are more reliable solar and no need to buy real estate or get permits.

Everything else is harder. Cooling is possible but heavy compared to what solar requires, the lifetime of the computer hardware will probably be lower in space, and it will be unserviceable. The launch cost would have to be very low, and the mean time between failures very high, before I think it would make any economic sense.

It would take a heck of a lot of launches to get a terrestrial datacenter's worth of compute, cooling, and solar into orbit, and even if you ship redundant parts, it would be hard to get equivalent lifetimes without the ability to have service technicians doing maintenance.


For the bridge, I love how it added a bunch of electrical wires along the top. IMO that's not very plausible, given there are tons of better places to run wires on a bridge, but somehow it does make the scene look substantially more realistic. Even though it seems to be trying to make everything look sad, I honestly find the results more inviting because they look lived-in.

