Hacker News | futharkshill's comments

> - Why A.B()? Period gets read as a sentence terminator and that function is being called with no parameters, which makes no sense.

What C syntax is this referring to?


You can do that, with a structure and a function pointer.

  struct a {
    void (*B)();
  };

  struct a A = { .B = ... };
  A.B();


Great example, especially considering op admitted he added that example by mistake :D


I severely overloaded a reference to C-family syntax and should have provided multiple examples instead. I erred by trying to avoid C-, C++-, or Java-specific comparisons.

What I should have written:

- Why "Foo()"? A function is being called with no parameters, which makes no sense.

- What is "A.B"? The period gets read as a sentence terminator. I learned C++ and Java very early in life and had never considered A.B to be an issue.


I think the only "hard" part of C's syntax is pointers, e.g. reading expressions like i++ or *a->b[0].


"monkey patch" is frequently used by Rubyists without any derogatory meaning.


True, but it was taken on by the ruby community as a badge of pride after being used as a slur by the aforementioned other community, IIRC.


I first heard the term in reference to how mootools messed with prototypes of JavaScript built-ins; at that time, I had no experience with either python or ruby.

Whatever context the term may have originated with, it far outgrew those origins. I definitely don't instinctively think of python programmers snubbing ruby when I hear the term.


C and C++ are completely different languages. The arbitrary parsing of data into structs in C is what makes it incredibly suitable for file parsing, among other things.


Yeah. Someone gave me a file and told me to parse it. I used structs and fread. It was easy-peasy! I do not know of any other languages where what I wanted to do would have been just as easy as it was in C.


It is almost as if it was written by people doing a lot of IO and networking :D


Sockets are a later addition, but it's definitely true wrt IO.


C is a subset of C++. The key point is that both languages suffer from the same problems, e.g. memory safety issues, undefined behavior, data races.

The software industry has been dealing with the fallout of these issues for the past three decades. It's about time we moved the bar higher.

By the way, the main data structure of my first large Rust program was simply a vector of a custom struct, optimized for memory footprint. Just like in C, with the same low overhead serialization.

With C, you always have to be careful to not overrun a buffer. The recent NSS vulnerability is a great example of this:

https://googleprojectzero.blogspot.com/2021/12/this-shouldnt...

Even though this code was heavily fuzzed and tested, the problem slipped through. Not possible in Rust. Also, with Rust you can throw away all your sanitizer infrastructure and save all the associated costs.


> C is a subset of C++

It is not; they diverged decades ago.

Still, even if that were the case, the programming styles are so different that there's nothing to gain from considering them any more similar than any two other languages.


> It is not; they diverged decades ago.

AFAIK you always had to cast `void *` in C++ (like the return value of `malloc`).


Why on Earth would one use malloc in C++, instead of 'new', if explicit allocation is desired?


If you need to change the buffer size afterwards.

My understanding is that most implementations of new call malloc under the hood (this may be outdated at this point; I haven't kept up with C++ implementations), and both of these systems introduce a layer of bookkeeping, so if you're in an extremely memory-constrained environment, you may want to use malloc directly.

If you want your code to be noexcept, you need to call malloc and handle the case where it returns null, since new can throw (an exception escaping a noexcept function calls std::terminate, so in practice everything just aborts); that lets the compiler strip out all the stack-unwinding code.

If you want to avoid the constructor call (for whatever reason).


We are talking about the question if C is a subset of C++. `new` certainly isn't a part of C, so also not an element of the intersection of C and C++.

Idiomatic C code doesn't explicitly cast the return value of malloc:

    foo *bar = malloc(sizeof *bar);
AFAIK C++ never allowed this (certainly not since C++98; the question is whether it was allowed sometime before standardization, but I think it never was), so you always had to do

    foo *bar = (foo *)malloc(sizeof *bar);
Therefore, C is not a subset of C++. But a (non-empty ;) intersection of C and C++ exists.


> Why on Earth would one use malloc in C++, instead of 'new', if explicit allocation is desired?

That was the point of my original comment that C++ and C are not the same language :)


You are being overly dramatic. A hundred years ago, workers were treated so badly and had such bad living conditions that it makes you sick just to hear about it. THAT led to violent revolutions in some countries, most of them young, undemocratic, or unstable. Some of them migrated 2,000 km from their home.

People have food, shelter, entertainment.


It's not only about food, shelter, and entertainment; it's about perspective. If you have to slave away for another 40 years, living paycheck to paycheck, and can barely afford a family, a revolution in whatever form it comes might sound good. Check out the strike at Kellogg's: 80-hour weeks, 16-hour shifts for barely any money, and the company just wants to fire the striking employees. You can only press so much productivity out of people until they push back, and I think we can see that this pushback is starting...


> if you have to slave away

Yeah, this is the reality disconnect: if you call modern work slaving away, you're probably not the kind of person that's going to join a violent revolt. It's easy to write that shit on forums.


If you call "80h weeks at Kellogg's" not slaving away, I don't know what is...

If you call "Amazon workers not being allowed to leave or shelter when there is a tornado warning" not slaving away, I don't know what is...

What about all those "gig economy" workers who get paid the bare minimum without benefits?

You might not see it from your cozy office because this development hasn't impacted you yet ... but it will sooner or later.


I think the thrust of the issue is having to pay to exist. Doing otherwise is criminal.

It's not a light switch, but we can try paving the roads at the bare minimum so that the societal wagon train can transition.

I do what I can, and you might as well, but there needs to be a fundamental ideological shift in how we perceive those cornerstone workers.


Plenty of stories on /r/antiwork make me feel sick too


I wonder how they would react to posting this link over there:

https://news.ycombinator.com/item?id=29581125


You would be surprised at the skills at the highest level of academia.


alternative: In 2021 we still use C et al. for our backend server, and we get hacked every single day. If I am going to leave a wide open door to my house, I at least want confidence that the house is not made out of cardboard


That's disingenuous. There are languages like PHP or JavaScript that are much, much faster than Python and that don't require you to give up the keys to your house.


Is it any more of an exaggeration than your post?

Also, PyPy is fast, and the speed of PHP also heavily depends on the version. Not that backend speed even makes a difference much of the time; 3ms vs 8ms won't matter.


I cannot overstate the importance of using a programming language that targets GPUs directly, like Futhark (https://github.com/diku-dk/futhark). In this case, it is a functional, declarative language where you can focus on the why, not the how. Just as with CPUs, which are incredibly complex, higher-level abstractions are very important.

If you were a pro GPU programmer and had 10 years, Futhark would maybe be 10x slower. But just as we do not program in assembly when making performance-critical software, most non-trivial things are easier to write in it.



Well, yes, but to be honest that code still has to be annotated with bounds, batch sizes, etc. In Futhark you need to know absolutely zero about GPUs.


Yes, but that's only required for one of those packages.

One of Julia's benefits is that the compiler is hackable so high level abstractions can be experimented with in user space.


Interesting.

How fast can Futhark be compared to a standard CUDA loop with a few arithmetic, load, and store operations? Basically, simple gathering and scattering?


I think it will always be slower than hand-optimized GPU code, just like assembly. But for most complex programs, I think the compiler is better than humans.

@athas, the author, at some point made comparisons against existing implementations, and it was at most twice as slow, but often faster.


If one is writing CUDA code manually, isn't said one also trying to optimize the code for performance? Otherwise, it'd just be run on a CPU.


If you are writing some very important function, you may write it in assembly and it will be faster than e.g. a C implementation (CPU). But how often do you do that? I think of e.g. CUDA as assembly for the GPU, as you have to know about batch sizes, special operations, and annotations. Futhark is like writing C or Java for the GPU (it actually compiles to CPUs as well); it is just a much nicer experience, and I think 99.9% of all people will write faster code with it, because GPUs are simply so complex.


> Higher Level abstractions

like here: https://www.youtube.com/watch?v=x_I6Bu7IDJo ?

