Hacker News | fluoridation's comments

"Shocolate"? Who says it like that?

People speaking languages other than English.

We're speaking English, so why even entertain the idea of pronouncing "axolotl" differently, in that case? The Japanese say "en", but that doesn't seem to inspire anyone else not to say "yen".

That's because in English we get it via Spanish, which doesn't have ʃ (although interestingly, it was just in the process of losing that sound in the early 17th century). If we're going from Nahuatl direct to English, and the Nahuatl sound also exists in English, then you may as well just use the correct sound. Otherwise, what are you going to do with Xochimilco?

>That's because in English we get it via Spanish

The misconception is that words enter "a language" and not individual people's minds. Most English speakers have never heard the word "axolotl" spoken in its original pronunciation, nor are they familiar with the orthography that spells a "sh" with X.

>Spanish, which doesn't have ʃ (although interestingly, it was just in the process of losing that sound in the early 17th century).

I don't know about 17th century, but some dialects of Spanish certainly do have that sound now.

>Otherwise, what are you going to do with Xochimilco?

In English, X at the start of a word is typically pronounced like a Z, as in "Xanadu", "Xanax", and "xylophone". I don't think anyone would bat an eye if you read it as "Zochimilco".


It’s not a misconception that the English word ‘chocolate’ exists and that there’s a particular history of how that came to be the case. I think, reading the thread again, I didn’t make it clear that the sentence you quoted was talking about the history of ‘chocolate’ and not ‘axolotl’.

If pronouncing Xochimilco according to English orthographic conventions is important to you as a matter of principle, then of course you can do it. But it’s a Mexican place name that has a canonical pronunciation that is not difficult for English speakers to approximate, so I can’t really see the point.

(And yes, ʃ does exist in some modern dialects of Spanish, but those aren’t the dialects that would influence the pronunciation of Spanish to English loan words in most cases. The interesting thing is that this was much less obviously the case in the early 1600s. Apparently the exact origin of ‘chocolate’ in Spanish is a bit of a complex historical linguistic puzzle.)


>If pronouncing Xochimilco according to English orthographic conventions is important to you as a matter of principle

No, not to me. I speak Spanish natively, but even I don't know how to say that. My first guess would be "Jochimilco", but I'd have to look it up (I'm not going to). I'm just saying that having Xs in weird places would not stop an English speaker from inventing a "wrong" pronunciation on the spot.

>But it’s a Mexican place name that has a canonical pronunciation that is not difficult for English speakers to approximate, so I can’t really see the point.

"Mexico" itself is also not difficult for English speakers to approximate, yet they don't. Clearly approximating the local pronunciation is not how foreign speakers decide how to pay toponyms, and that's fine. That's how languages are shaped.

My point is just that it makes no sense to get hung up on speakers not pronouncing loanwords "correctly". If we're going down this path, we should also complain that Spanish speakers write "fútbol" instead of "football", and that tea is called "tea" instead of "cha" and spelled "荼". We should demand that words be crystallized in their pronunciation and orthography when they cross language barriers.


Not really - it is [t͡ʃ] (“ch”) not [ʃ] (“sh”).

In German, it's Schokolade. /ʃoko/, per https://en.wiktionary.org/wiki/Schokolade.

Any self-respecting Aztecophile. They're also the cause of startup names dropping a vowel. Insufferable.

I'd accept the epithet of "nerd", and I've never played a board game (assuming Baldur's Gate 3 doesn't count). Especially among the more complex ones, I don't see why someone would rather run the rules manually instead of letting a computer take care of the bookkeeping and just play.

Doesn't braille already cover this need?

The question is whether such interference could be created by a device as a by-product of its normal operation, not by a weapon that's intended to cause harm.

Not "not use it". Not use it to make people believe they're talking to real people.

The crazy thing is how effort posts went from the most valuable part of this site to the most hated part of it, at the hands of the very people claiming to be protecting it.

That's insane. I bought two in December for ARS 1.2M (a little less than USD 1000). Maybe OpenClaw raised the demand.

>There is a potential that a person will have to repeat a course if it is a requirement for their degree.

How is that not high stakes? The result of several months worth of effort hinges on what they do during a 2-3-hour window. If you had to build something and the last step involved a procedure that could potentially tear down everything you made, you would try to find a way to redesign it so that didn't happen, or to limit the scope of the damage.


>you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'.

Sounds like you're saying that it's acceptable to be a little foggy about the limits of your knowledge, as long as you remember the core foundations. For a first year medicine student, the edge of their knowledge will include things that are the core foundation of a practicing doctor. Why should such a student be tested as if he already had several years of familiarity with the subject when this is all relatively new material to him?


Quad channel memory is not common on consumer desktops, it's a strictly HEDT and above feature. The vast majority of consumer desktops have 2 channels or fewer.

One should no longer use the word "channel" because the width of a channel differs between various kinds of memories, even among those that can be used with the same CPU (e.g. between DDR and LPDDR or between DDR4 and DDR5).

For instance, the majority of desktops with DDR5 now have 4 channels, not 2, but the channels are narrower, so the width of the memory interface is the same as before.

To avoid ambiguities, one should always write the width of the memory interface.

Most desktop computers and laptop computers have 128-bit memory interfaces.

The cheapest desktop computers and laptop computers, e.g. those with Intel Alder Lake N/Twin Lake CPUs, and also many smartphones and Arm-based SBCs, have 64-bit memory interfaces.

Cheaper smartphones and Arm-based SBCs have 32-bit memory interfaces.

Strix Halo and many older workstations and many cheaper servers have 256-bit memory interfaces.

High-end servers and workstations have 768-bit or 512-bit memory interfaces.

It is expected that future high-end servers will have 1024-bit memory interfaces per socket.

GPUs with private memory usually have memory interfaces between 192-bit and 1024-bit, but newer consumer GPUs usually have narrower memory interfaces than older consumer GPUs, to reduce cost. The narrower memory interface is compensated by faster memory, so the available bandwidth in consumer GPUs has increased much more slowly than the increase in GDDR memory speed would have allowed.


>now the majority of desktops with DDR5 have 4 channels, not 2 channels

Source? I just looked up two random X870E boards from Gigabyte and both are dual channel.

>To avoid ambiguities, one should always write the width of the memory interface.

They're incomparable quantities. More channels support more parallel operations, while a wider bus at a constant frequency supports higher throughput.

The bus width is not even that useful of a metric. It's more useful to talk about bits per second, which is the product of bus width and frequency.
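To make that product concrete, here's a back-of-the-envelope sketch; the DDR5-6000 on a 128-bit interface is my own illustrative example, not a figure from the thread:

```python
# Peak theoretical bandwidth = bus width * transfer rate.
# Illustrative figures: a 128-bit interface at 6000 MT/s (DDR5-6000).
bus_width_bits = 128
transfers_per_second = 6000 * 10**6

bytes_per_second = bus_width_bits // 8 * transfers_per_second
print(bytes_per_second / 10**9)  # 96.0 (GB/s peak)
```

This is the peak figure vendors quote; sustained throughput is lower once command overhead and refresh are accounted for.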


Sadly motherboards, tech journalists, and many other places confuse a DIMM with a channel. The trick is that in the DDR4 generation they were the same, 64 bits wide. However, a standard DDR5 DIMM is not 1x64-bit; it's actually 2x32-bit. Thus 2 DDR5 DIMMs = 4 channels.

For some workloads the extra channels help, despite having the same bandwidth. This is one of the reasons that it's possible for a DDR5 system to be slightly faster than a DDR4 system, even if the memory runs at the same speed.


>However a standard DDR5 dimm is not 1x64 bit, it's actually 2x32 bit. Thus 2 DDR5 dimms = 4 channels.

Uh, surely that depends on how the motherboard is wired. Just because each DIMM has half the pins on one channel and the other half on another, doesn't mean 2 DIMM = 4 channels. It could just be that the top pins over all the DIMMs are on one channel and the bottom ones are on another.


I think there's a standard wiring for the DIMM, and some parts are shared. Each normal DDR5 DIMM has 2 subchannels of 32 bits each, and there's a new specification, the HUDIMM, which will only enable 1 subchannel and so only have half the bandwidth.

I don't think you can wire up DDR5 DIMMs willy-nilly as if they were 2 separate 32-bit DIMMs.


Well, I don't know what to tell you. I'm not a computer engineer, but I assume Gigabyte has at least a few of those, and they're labeling the X870E boards with 4 DIMMS as "dual channel". I feel like if they were actually quad channel they'd jump at the chance to put a bigger number, so I'm compelled to trust the specs.

In computer-manufacturer speak, dual channel = 2 x 64 bit = 128 bits wide.

So with 2 dimms or 4 you still get 128 bit wide memory. With DDR4 that means 2 channels x 64 bit each. With DDR5 that means 4 channels x 32 bit each.

Keep in mind that the memory controller is in the CPU; that's where the DDR4/DDR5 controller lives. The motherboard's job is to connect the right pins on the DIMMs to the right pins on the CPU socket. The days of an off-chip memory controller/north bridge are long gone.

So if you look at an AM5 CPU it clearly states:

   * Memory Type: DDR5-only (no DDR4 compatibility).

   * Channels: 2 Channel (Dual-Channel).

   * Memory Width: 2x32-bit sub-channels (128-bit total for 2 sticks).

Why are you quoting something that contradicts you? It clearly states it's a dual-channel memory architecture with 32-bit subchannels. The fact that two different words are used means they mean different things.

>In computer manufacture speak dual channel = 2 x 64 bit = 128 bits wide.

Yes, because AMD64 has 64-bit words. You can't satisfy a 64-bit load or store with just 32 bits (unless you take twice as long, of course). That you get 4 32-bit subchannels doesn't mean you can execute 4 simultaneous independent 32-bit memory operations. A 64-bit channel capable of a full operation still needs to be assembled out of multiple 32-bit subchannels. If you install a single stick you don't get any parallelism with your memory operations; i.e. the system runs in single channel mode, the single stick fulfilling only a single request at a time.


AM5 is the AMD standard, so it's accurate; it seems rather pedantic to differentiate between "2 subchannels per DIMM" and "4 32-bit channels for a total of 128 bits".

However, the motherboard vendors annoyingly hide that from you by calling DDR4 dual channel (2 x 64 bit, which means two outstanding cache misses, one per channel) and just glossing over the difference by also calling DDR5 dual channel (4 x 32 bit, which means 4 outstanding cache misses).

> Yes, because AMD64 has 64-bit words.

It's a bit more complicated than that. First you have 3 levels of cache, the last of which triggers a cache-line load, which is 64 bytes (not 64 bits). That goes to one of the 4 channels, and there's a long latency before the first 64 bits arrive. Then there are the complications of opening the row, which makes its columns available and can speed things up if you need more than one column from that row. But the general idea is that you get at most one cache line per channel after waiting out the memory latency.

So DDR4 on a 128-bit system can have 2 cache lines in flight: 128 bytes per memory latency. On a DDR5 system you can have 4 cache lines in flight per memory latency. Sure, you need the bandwidth, and 32-bit channels have half the bandwidth per clock, but the trick is that the memory bus spends most of its time waiting on memory to start a transfer. So waiting 50 ns then getting 32 bits @ 8000 MT/s isn't that different from waiting 50 ns and getting 64 bits @ 8000 MT/s.
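A quick sketch of that arithmetic; the 50 ns latency and 8000 MT/s figures are from the paragraph above, the rest (64-byte cache line, a hypothetical `cache_line_time_ns` helper) is my own illustration:

```python
# Time to deliver one 64-byte cache line after a fixed access latency,
# comparing a 32-bit and a 64-bit channel at the same transfer rate.
LATENCY_NS = 50.0            # assumed DRAM access latency
RATE = 8000 * 10**6          # 8000 MT/s, transfers per second

def cache_line_time_ns(channel_bits, line_bytes=64):
    transfers = line_bytes * 8 / channel_bits   # beats needed for the line
    burst_ns = transfers / RATE * 10**9         # pure transfer time
    return LATENCY_NS + burst_ns

print(cache_line_time_ns(32))  # 52.0 ns
print(cache_line_time_ns(64))  # 51.0 ns
```

The burst is 1-2 ns against a 50 ns wait, which is why halving the channel width barely changes per-line latency while doubling how many misses can be serviced concurrently.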

Each 32-bit subchannel can handle a unique address, which is turned into a row/column, and a separate transfer when done. So a normal DDR5 system can look up 4 addresses in parallel, wait for the memory latency, and return a 64-byte cache line on each.

Even better when you have something like Strix Halo, which actually has a 256-bit-wide memory system (twice any normal tablet, laptop, or desktop) but also has 16 channels x 16 bit, so it can handle 16 cache misses in flight. I suspect this is mostly to keep its aggressive iGPU fed.


There's a more general formulation, which is that every value but one must appear an even number of times, and the one must appear some odd number of times.

How about every value appears 0 mod n times except one which appears 1 mod n times? :)

Solution: xor is just addition mod 2. Write the numbers in base n and do digit-wise addition mod n (i.e., without carry). A very intuitive way to see the xor trick.
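A sketch of that digit-wise trick in Python (the function name and the 64-digit cap are my own choices, assuming non-negative inputs):

```python
def find_odd_one_out(nums, n):
    """Find the value appearing 1 mod n times when every other value
    appears 0 mod n times. Sum base-n digits mod n, with no carries;
    for n == 2 this degenerates to plain XOR."""
    digits = [0] * 64               # enough base-n digits for 64-bit-ish inputs
    for x in nums:
        for i in range(64):
            digits[i] = (digits[i] + x % n) % n   # carryless digit-wise add
            x //= n
    # Reassemble the surviving digits back into an integer.
    result, place = 0, 1
    for d in digits:
        result += d * place
        place *= n
    return result

print(find_odd_one_out([5, 7, 5, 5, 7, 7, 9], 3))  # 9
```

Values appearing a multiple-of-n times contribute 0 to every digit, so only the odd one out survives, just as pairs cancel under XOR.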

