Anyone know what in particular the 486SX had that the 386 didn’t, to make keeping only the former alive practical? Unfortunately the kernel mailing list link in the ZDNet article appears to have rotted away.
I believe 386 did not have certain "atomic" instructions like CMPXCHG or XADD. So in order to support 386, the kernel had to have special versions of all locking primitives just for 386.
I was always under the impression that Linux 386 simply meant 32 bit support. So, no 386 support would mean no 32 bit support rather than the 386 specifically.
I have seen i386 being used in a couple of contexts. For example, Debian uses it to refer to the 32-bit port[1]. It also seemed to be used when various Linux distributions built Pentium optimized packages when that became a thing. In that case 386 would be used for earlier 32-bit processors and 686 would be used for later 32-bit processors. So the nomenclature is not always clear.
Since the modern Linux kernel doesn't make any promises of 486 support, there are very likely plenty of places where inline assembly uses instructions unavailable on a 486.
So even if you compiled it with a strict -march=i486 you'd end up with unsupported instructions. Modern GCC also doesn't like emitting strict i386 or i486 code anymore, so you'd likely end up with unsupported instructions even without any inline assembly.
I don't know offhand what instructions were introduced in Pentium or later.
Doing some googling... 486 had cmpxchg, that's good.
Seems it didn't have rdtsc. It's not out of the question, I might say even pretty common, for user mode code to use that one via inline asm to implement a timer.
Obviously no SIMD instructions. If they say they're supporting 686 or better they might have some compiler flags that depend on mmx or similar? SSE came late to P6.
Additionally, he keeps talking about dereferencing the pointer, which I don't think is right. The pointer never gets dereferenced in the code shown.
I'm not an x86 guru, but I think that "movq ptr(%rip), %rsi" is different because ptr needs to be moved from relative to the instruction pointer (because it is on the stack, as a non-const variable).
It's a global variable, it's not on the stack. It's in the data section.
$arr is copying the address of "arr", so it must use an immediate move (possibly 64-bit). It could also use RIP-relative with the "lea" (load effective address) instruction.
ptr(%rip) is accessing memory, so it uses RIP-relative addressing with the "mov" instruction.
The code is more interesting if you -do- actually dereference the pointer, too. I've changed the prototype of "bogus" to take a single char as the first argument, and dereference the two pointers inside the do_arr/do_ptr functions:
So the "do_arr" version "knows" what value is at * arr because "arr" is immutable, so the compiler can just choose to load a constant. The "do_ptr" version has to load from memory instead.
But what if we tell the compiler that the value of "ptr" won't change (i.e. "char * const" instead of "const char * ")? The code becomes the same:
This is a little off topic, but does anyone have any good resources to help me wrap my head around pointers in C? Right now they are very confusing to me.
Imagine all memory is a big array. A pointer is just an index into that array. A pointer dereference is like accessing something at an array index. A pointer to a pointer is array index to a location where you'll find another array index.
>Let's assume that when a river gets redirected, a scientist goes and investigates it
This is a faulty assumption and is what leads to the wrong conclusion. The probability of 0.5% is for a randomly selected river. That is, if you went and examined 200 randomly selected rivers, 1 of them (on average) would be redirected due to natural variability.
That does not imply that the remaining 199 were redirected due to global warming. It does not even imply that the remaining 199 were redirected at all!
What is needed is the percentage of rivers that have undergone this redirection. Here's a simplified example: If it's ~0.5%, you conclude it's just natural variation. If it's >0.5%, you conclude that something (possibly global warming) is increasing the number of rivers that are being redirected. If it's <0.5%, you conclude that something is decreasing the number of rivers that are being redirected.
"shows our estimate that there is only a 0.5% chance that the observed retreat of Kaskawulsh Glacier happened in the absence of a climate trend"
The 0.5% has nothing to do with the river. It is their confidence that the retreat of the glacier could occur in the absence of a climate trend based on their model.
Can you clarify why it is incorrect to reverse that into the statement "We estimate that there is a 99.5 percent chance that the observed retreat did not happen in the absence of a climate trend."? I confess to a fair amount of confusion at this point :) I'm sure there is something subtle (or perhaps obvious, and my brain is failing) that I'm missing.
Oh I thought we were having an interesting discussion about the linguistic mapping between probability and regular English. Sorry for wasting your time. :(
> The probability of 0.5% is for a randomly selected river
Sorry, what is the probability .5% for exactly? The probability a river is redirected under global warming conditions? I didn't think that's what they computed. If it is, then my bad. :) I thought they had computed given that the river was redirected, what is the likelihood it happened due to global warming. Ah well, like I said, the details are the hard part. :)
I think a better analogy is that it's like having many pairs of dice, and rolling each pair in turn until you get a roll of 12. Then concluding "this particular pair of dice must be loaded".
Presumably, the researchers did not select rivers at random to study, they selected this river in particular because of the changes it is undergoing.
China's growth is stagnant. That's why they have needed tens of trillions in new debt since the Great Recession to juice the GDP numbers. Subtract out the $30 trillion in new debt, factor in the real inflation rate (like many governments, China lies about its real inflation rate), and you get economic contraction. That means the only difference between their fake growth today and the horrific outcome on the way is a little bit of time and tens of trillions in additional debt that will never be repaid.
Not necessarily. 70% of the U.S. economy is based on consumer spending, meaning the disposable income has to be there in the first place. China's economy is not structured this way.
> The argument "Addition breaks" proves just as well that zero "isn't a number", since it breaks division rather badly.
Mathematically, the familiar number systems (the rationals, reals, and complex numbers) are defined as fields. Fields (or, more generally, rings, which all fields are) are defined by both addition and multiplication, not just one of them. [1]
Not only does there exist a bijective mapping between [1,2] and [1,4], there exist infinitely many different bijective mappings between subsets of [1,2] and [1,4].
e.g.: one could map [1,1.5] bijectively onto [1,4], and separately map [1.5,2] bijectively onto [1,4].
To talk about there being "twice as much" in one uncountable infinity as in another uncountable infinity is nonsense: a word like "twice" doesn't apply, because the infinities can't be counted.
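For concreteness, explicit linear bijections (my own examples, not from the thread):

```latex
% [1,2] onto [1,4]: f(1)=1, f(2)=4, strictly increasing, hence bijective
f\colon [1,2] \to [1,4], \qquad f(x) = 3x - 2
% even the strict subset [1,1.5] maps onto all of [1,4]: g(1)=1, g(1.5)=4
g\colon [1,1.5] \to [1,4], \qquad g(x) = 6x - 5
```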
int a = f();
if (a < 0) a = 0;
has been replaced with
const int a = MAX(0,f());
I believe the latter will generally result in 2 calls to f() (unless it returns less than zero), which could cause undesired behavior or just be inefficient. Of course, it depends on what f() does.
Assuming MAX(0,f()) expands to something like this:
It was a poor choice on the part of the author to use the macro-looking MAX() in his example. There is in fact a standard max() function in the <algorithm> header that does not suffer from silly macro-expansion problems:
template<class T>
const T& max(const T& a, const T& b) {
    return a < b ? b : a;
}

template<class T, class Predicate>
const T& max(const T& a, const T& b, Predicate p) {
    return p(a, b) ? b : a;
}
And your definition of MAX() is wrong, not just because of multiple evaluation of the arguments. The “conventionally wrong” MAX() would be this:
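Presumably something like the usual ternary macro:

```c
/* The textbook ternary macro: "wrong" because each argument may be
 * evaluated twice. */
#define MAX(a, b) ((a) > (b) ? (a) : (b))
```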
The problem is a loss of clarity. The next person looking at the code isn't necessarily going to know how MAX is defined, and saving one line of code isn't worth making the behavior less obvious.
That said, for me, your code would be clear, as not being in all caps, I would assume it's a function and therefore doesn't have the same weakness.
As someone else pointed out, there's no way you can look at MAX(0,f()) and know that it's enforced that f() is only used once within the macro.
If MAX evaluates f() twice, then it is broken. It's trivial to write a version that only evaluates its arguments once in C++ using templates, and in C using common extensions that allow for macros with multiple statements that act as an expression.
https://www.debian.org/releases/stable/i386/ch02s01.en.html#...