I severely overloaded a reference to C-family syntax and should have provided multiple examples instead. I erred by trying to avoid C-, C++-, and Java-specific comparisons.
What I should have written:
- Why "Foo()" reads as a function being called with no parameters, which makes no sense to someone who doesn't know the syntax.
- What "A.B" means: the period gets read as a sentence terminator. I learned C++ and Java very early in life and had never considered A.B to be an issue.
I first heard the term in reference to how MooTools messed with the prototypes of JavaScript built-ins; at that time, I had no experience with either Python or Ruby.
Whatever context the term may have originated in, it has far outgrown those origins. I definitely don't instinctively think of Python programmers snubbing Ruby when I hear the term.
C and C++ are completely different languages. The arbitrary parsing of data into structs in C is what makes it incredibly suitable for file parsing, among other things.
Yeah. Someone gave me a file and told me to parse it. I used structs and fread. It was easy-peasy! I do not know of any other languages where what I wanted to do would have been just as easy as it was in C.
C is a subset of C++. The key point is that both languages suffer from the same problems, e.g. memory safety issues, undefined behavior, data races.
The software industry has been dealing with the fallout of these issues for the past three decades. It's about time we moved the bar higher.
By the way, the main data structure of my first large Rust program was simply a vector of a custom struct, optimized for memory footprint. Just like in C, with the same low overhead serialization.
With C, you always have to be careful to not overrun a buffer. The recent NSS vulnerability is a great example of this.
Even though this code was heavily fuzzed and tested, the problem slipped through. Not possible in Rust. Also, with Rust you can throw away all your sanitizer infrastructure and save all the associated costs.
Still, even if that were the case, the programming styles are so different that there's nothing to gain from considering them any more similar than any two other languages.
My understanding is that most implementations of new call malloc under the hood (this may be outdated at this point; I haven't kept up with C++ implementations), and both of these systems introduce a layer of record keeping, so if you're in an extremely memory-constrained environment, you may want to use malloc directly.
If you want your code to be noexcept, you need to call malloc and handle the case where it returns null, since new can throw (throwing out of a noexcept function is supposed to call std::terminate; in practice I'm pretty sure everything just aborts), and that lets you strip out all the stack-unwinding code.
If you want to avoid the constructor call (for whatever reason).
We are talking about the question of whether C is a subset of C++.
`new` certainly isn't a part of C, so also not an element of the intersection of C and C++.
Idiomatic C code doesn't explicitly cast the return value of malloc:
foo *bar = malloc(sizeof *bar);
C++ AFAIK never allowed this (certainly not in C++98; the question is whether it was allowed sometime before standardization, but I think it never was), so you always had to do
foo *bar = (foo *)malloc(sizeof *bar);
Therefore, C is not a subset of C++. But a (non-empty ;) intersection of C and C++ exists.
You are being overly dramatic. A hundred years ago, workers were treated so badly and had such bad living conditions that it makes you sick just to hear about it. THAT led to violent revolutions in some countries, most of them young, undemocratic, or unstable. Some of them migrated 2,000 km from their homes.
It's not only about food, shelter, and entertainment; it's about perspective. If you have to slave away for another 40 years, living paycheck to paycheck, barely able to afford a family, a revolution in whatever form it comes might sound good. Check out the strike at Kellogg's: 80-hour weeks, 16-hour shifts, for barely any money, and the company just wants to fire the striking employees. You can only press so much productivity out of people until they push back, and I think we can see that this pushback is starting...
Yeah, this is the reality disconnect: if you call modern work slaving away, you're probably not the kind of person who's going to join a violent revolt. It's easy to write that shit on forums.
Alternative: In 2021 we still use C et al. for our backend server, and we get hacked every single day. If I am going to leave a wide-open door to my house, I at least want confidence that the house is not made out of cardboard.
That's disingenuous. There are languages like PHP or JavaScript that are much, much faster than Python and that don't require you to give up the keys to your house.
Also, PyPy is fast, and the speed of PHP depends heavily on the version. Not that backend speed even makes a difference much of the time; 3 ms vs. 8 ms won't matter.
I cannot overstate the importance of using a programming language that targets GPUs directly, like Futhark (https://github.com/diku-dk/futhark). In this case, it is a functional, declarative language where you can focus on the why, not the how. Just as with CPUs, which are incredibly complex, higher-level abstractions are very important.
If you were a pro GPU programmer and had 10 years, Futhark might be maybe 10x slower than your hand-written code. But just as we do not program in assembly when writing performance-critical software, most non-trivial things are easier to write in Futhark.
Well, yes, but to be honest that code still has to be annotated with bounds, batch sizes, etc. In Futhark you need to know absolutely nothing about GPUs.
How fast can Futhark be compared to a standard CUDA loop with a few arithmetic, load, and store operations? Basically, suppose you're doing simple gathering and scattering.
I think it will always be slower than hand-optimized GPU code, just like assembly. But for most complex programs, I think the compiler is better than humans.
@arthas, the author, at some point made comparisons against hand-written implementations, and it was at most twice as slow, but often faster.
If you are writing some very important function, you may write it in assembly and it will be faster than, e.g., a C implementation (on the CPU). But how often do you do that? I think of, e.g., CUDA as assembly for the GPU, since you have to know about batch sizes, special operations, and annotations, while Futhark is like writing C or Java for the GPU (it actually compiles to CPUs as well). It is just a much nicer experience, and I think 99.9% of all people will write faster code with it because GPUs are simply so complex.
What C syntax is this referring to?