

also, this video features a 486 cpu because linux dropped 386 support a while ago.

https://www.zdnet.com/article/good-bye-386-linux-to-drop-sup...


Anyone know what in particular the 486SX had that the 386 didn’t to make keeping only the former alive practical? Unfortunately the kernel mailing list link in the zdnet article appears to be rotted away.


I believe 386 did not have certain "atomic" instructions like CMPXCHG or XADD. So in order to support 386, the kernel had to have special versions of all locking primitives just for 386.


Here's a working link: http://lkml.iu.edu/hypermail/linux/kernel/1212.1/01152.html

From that link:

  x86, 386 removal: Remove CONFIG_CMPXCHG
  x86, 386 removal: Remove CONFIG_XADD
  x86, 386 removal: Remove CONFIG_BSWAP
  x86, 386 removal: Remove CONFIG_INVLPG
  x86, 386 removal: Remove CONFIG_X86_WP_WORKS_OK
  x86, 386 removal: Remove CONFIG_X86_POPAD_OK


Basic multitasking (CMPXCHG) and cache management instructions.


I was always under the impression that Linux 386 simply meant 32 bit support. So, no 386 support would mean no 32 bit support rather than the 386 specifically.


I have seen i386 being used in a couple of contexts. For example, Debian uses it to refer to the 32-bit port[1]. It also seemed to be used when various Linux distributions built Pentium optimized packages when that became a thing. In that case 386 would be used for earlier 32-bit processors and 686 would be used for later 32-bit processors. So the nomenclature is not always clear.

[1] https://www.debian.org/releases/stable/i386/


No, that's x86.

They dropped 386 support, but a 486 still works.


Would it be feasible to just build a Debian image for the i486? Few packages should depend on something newer.


Since the modern Linux kernel doesn't make any promises of 486 support, there are very likely plenty of places where inline assembly uses instructions unavailable on a 486.

So even if you compiled with a strict -march=i486 you'd end up with unsupported instructions. Modern GCC also doesn't reliably emit strict i386 or i486 code anymore, so you'd likely end up with unsupported instructions even without any inline assembly.


I don't know offhand what instructions were introduced in Pentium or later.

Doing some googling... 486 had cmpxchg, that's good.

Seems it didn't have rdtsc. It's not out of the question, I might say even pretty common, for user mode code to use that one via inline asm to implement a timer.

Obviously no SIMD instructions. If they say they're supporting 686 or better they might have some compiler flags that depend on mmx or similar? SSE came late to P6.


Additionally, he keeps talking about dereferencing the pointer, which I don't think is right. The pointer never gets dereferenced in the code shown.

I'm not an x86 guru, but I think that "movq ptr(%rip), %rsi" is different because ptr needs to be moved from relative to the instruction pointer (because it is on the stack, as a non-const variable).


It's a global variable, it's not on the stack. It's in the data section.

$arr is copying the address of "arr", so it must use an immediate move (possibly 64-bit). It could also use RIP-relative with the "lea" (load effective address) instruction.

$ptr(%rip) is accessing memory, so it uses RIP-relative addressing with the "mov" instruction.


> It's a global variable, it's not on the stack. It's in the data section.

And because it's a const, it's in the .rodata section... (read-only) Playing too liberally with the data often leads to a SEGV (as it should).


The code is more interesting if you -do- actually dereference the pointer, too. I've changed the prototype of "bogus" to take a single char as the first argument, and dereference the two pointers inside the do_arr/do_ptr functions:

https://goo.gl/jZYx0f

So the "do_arr" version "knows" what value is at *arr because "arr" is immutable, so the compiler can just choose to load a constant. The "do_ptr" version has to load from memory instead.

But what if we tell the compiler that the value of "ptr" won't change (i.e. "char * const" instead of "const char * ")? The code becomes the same:

https://goo.gl/iNaUHe

So basically "const char []" and "char * const" are logically equivalent here.


This is a little off topic, but does anyone have any good resources to help me wrap my head around pointers in C? Right now they are very confusing to me.


Imagine all memory is a big array. A pointer is just an index into that array. A pointer dereference is like accessing something at an array index. A pointer to a pointer is array index to a location where you'll find another array index.


I understand pointers conceptually. It's the syntax (in C specifically) that is giving me trouble. I am fine with pointers in ASM.


Hmm, it's been a long time since I read it, but this sounds like something that good old K&R probably does well. It's famously short and well-written.

It's old-fashioned but I assume (maybe others can correct me) the latest edition is up to date enough that it won't teach you any outright bad habits.


Thank you very much! I'll give it a look.


You are right. This is RIP-relative addressing.


>Let's assume that when a river gets redirected, a scientist goes and investigates it

This is a faulty assumption and is what leads to the wrong conclusion. The probability of 0.5% is for a randomly selected river. That is, if you went and examined 200 randomly selected rivers, 1 of them (on average) would be redirected due to natural variability.

That does not imply that the remaining 199 were redirected due to global warming. It does not even imply that the remaining 199 were redirected at all!

What is needed is the percentage of rivers that have undergone this redirection. Here's a simplified example: If it's ~0.5%, you conclude it's just natural variation. If it's >0.5%, you conclude that something (possibly global warming) is increasing the number of rivers that are being redirected. If it's <0.5%, you conclude that something is decreasing the number of rivers that are being redirected.


From the paper:

"shows our estimate that there is only a 0.5% chance that the observed retreat of Kaskawulsh Glacier happened in the absence of a climate trend"

The 0.5% has nothing to do with the river. It is their confidence that the retreat of the glacier could occur in the absence of a climate trend based on their model.


Can you clarify why it is incorrect to reverse that into the statement "We estimate that there is a 99.5 percent chance that the observed retreat did not happen in the absence of a climate trend."? I confess to a fair amount of confusion at this point :) I'm sure there is something subtle (or perhaps obvious, and my brain is failing) that I'm missing.

What is the properly worded complement?


This seems similar, but not identical, to the statement:

If there is not a climate trend, we would expect this to happen with a 0.005 (0.5%) chance.

If they meant the latter, my confusion is resolved.


This is the blind leading the blind.

Fundamental truth: bayes theorem.

P(evidence | null hypothesis) = P(null hypothesis | evidence) * P(evidence) / P(null hypothesis)

The P-value test determines:

P(evidence | null hypothesis) = 0.5%

= there is a 0.5% chance of the observed evidence given the null hypothesis

The statement "We estimate that there is a 99.5 percent chance that the observed retreat did not happen in the absence of a climate trend."

translates to P(!null hypothesis | evidence) = 99.5%

By Bayes theorem:

P(!null hypothesis | evidence) = P(evidence | !null hypothesis) * P(!null hypothesis) / P(evidence)

We know almost none of these terms. The answer is not as simple as 99.5%.


Oh I thought we were having an interesting discussion about the linguistic mapping between probability and regular English. Sorry for wasting your time. :(


> The probability of 0.5% is for a randomly selected river

Sorry, what is the probability .5% for exactly? The probability a river is redirected under global warming conditions? I didn't think that's what they computed. If it is, then my bad. :) I thought they had computed given that the river was redirected, what is the likelihood it happened due to global warming. Ah well, like I said, the details are the hard part. :)


I think a better analogy is that it's like having many pairs of dice, and rolling each pair in turn until you get a roll of 12. Then concluding "this particular pair of dice must be loaded".

Presumably, the researchers did not select rivers at random to study, they selected this river in particular because of the changes it is undergoing.


https://www.google.com/search?q=china+income+inequality

If that was the case, China's growth would be stagnating as well. Not everything is caused by income inequality.


China's growth is stagnant. That's why they needed tens of trillions in new debt since the great recession to juice the GDP numbers. Subtract out the $30 trillion in new debt, throw in the real inflation rate (like many governments, China lies about their real inflation rate), and you get economic contraction. That means the only difference between their fake growth today, and the horrific outcome on the way, is a little bit of time and tens of trillions in additional debt that will never be repaid.


Not necessarily. 70% of the U.S. economy is based on consumer spending, meaning the disposable income has to be there in the first place. China's economy is not structured this way.


Where 1 <= x <= 4:

f(x) = (x-1)/9 + 1, maps bijectively from [1,4] to [1,4/3]

g(x) = (x-1)/9 + 4/3, maps bijectively from [1,4] to [4/3,5/3]

h(x) = (x-1)/9 + 5/3 maps bijectively from [1,4] to [5/3,2]

Therefore, [1,2] must contain three times as many numbers as [1,4], right?

It doesn't work like that.


I'm not sure what you are doing with the above.


> The argument "Addition breaks" proves just as well that zero "isn't a number", since it breaks division rather badly.

Mathematically, numbers (be they rational, real, or complex) are defined as a field. Fields (or, more accurately, rings, which all fields are) are defined by addition and multiplication, not division. [1]

[1] https://en.wikipedia.org/wiki/Ring_(mathematics)


Sometimes you may use a "number", but you just want ordering properties from it.

You may not want addition and multiplication in such a "number".

This may be the case in numbers used for ranking outcomes, or counting, or optimization. Using infinity in this context does not cause problems.


Not only does there exist a bijective mapping between [1,2] and [1,4], there exist infinitely many different bijective mappings between subsets of [1,2] and [1,4].

e.g.: One could map bijectively from [1,1.5] to [1,4] and map bijectively from [1.5,2] to [1,4] (1)

To talk about there being "twice as much" in one uncountable infinity than in another uncountable infinity is nonsense, since you can't apply words like "twice", since the infinities can't be counted.

(1) https://imgur.com/NkKEI


  int a = f();
  if (a < 0) a = 0;

has been replaced with

  const int a = MAX(0,f());

I believe the latter will generally result in 2 calls to f() (unless it returns a negative value), which could cause undesired behavior or just be inefficient. Of course, it depends on what f() does.

Assuming MAX(0,f()) expands to something like this:

  if( f() < 0 )
     a = 0;
  else
     a = f();


It was a poor choice on the part of the author to use the macro-looking MAX() in his example. There is in fact a standard max() function in the <algorithm> header that does not suffer from silly macro-expansion problems:

    template<class T>
    const T& max(const T& a, const T& b) {
        return a < b ? b : a;
    }

    template<class T, class Predicate>
    const T& max(const T& a, const T& b, Predicate p) {
        return p(a, b) ? b : a;
    }
And your definition of MAX() is wrong, not just because of multiple evaluation of the arguments. The “conventionally wrong” MAX() would be this:

    #define MAX(a, b) ((a) < (b) ? (b) : (a))


The problem is a loss of clarity. The next person looking at the code isn't necessarily going to know how MAX is defined, and saving 1 line of code isn't worth making it less clear.

That said, for me, your code would be clear: since it's not in all caps, I would assume it's a function and therefore doesn't have the same weakness.

As someone else pointed out, there's no way you can look at MAX(0,f()) and know that it's enforced that f() is only used once within the macro.


If MAX evaluates f() twice, then it is broken. It's trivial to write a version that only evaluates its arguments once in C++ using templates, and in C using common extensions that allow for macros with multiple statements that act as an expression.

