Does your course not have exams or in-lab assignments? Should sort itself out. Honestly, I'm all for homework fading away as professors can't figure out how to prevent people from using AI. It used to be the case that certain kids could get away with not doing much because they were popular enough to get people to let them copy their assignments (at least for certain subjects). Eventually the system will realize they can't detect AI and everything has to be in-person.
Sure, this guy is likely to fail the course. The point is: he is already working in the field. I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
> I don't know his exact job, but if it involves programming, or even scripting, he is faking his way with AI, not understanding what he's doing. That is frightening.
That could be considered malpractice. I know our profession currently doesn't have professional standards, but it's just a side effect of it being very new and not yet solidified; it won't be long until some duty of care becomes required, and we're already starting to see some movement in that direction, with things like the EU CRA.
He is a Chrome developer. His claims that he was raising the quality level of the web are particularly hilarious given that he worked at Google. Maybe a Google salary blinds people into believing this.
His opinions on include files have fallen out of favor: compilers are much faster now, and the practice adds needless work. Are there organizations that still do this? None of the style guides I've seen recommend it.
I believe clang and gcc avoid reading in and re-processing include files that are already included, so his advice is unnecessary and creates a lot of maintenance burden, especially for C++ where a lot more code is in header files. It may still be useful for old compilers, though.
They recognize include guards and skip any further inclusions for those cases. There are scenarios where you may want multiple inclusion and you can still have that.
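The guard pattern those compilers recognize can be sketched in one file; `POINT_HPP` and the `Point` type are invented names, and the header body is pasted twice to stand in for a double #include:

```cpp
// Simulated contents of a hypothetical point.hpp, appearing twice to
// mimic a header that gets #included twice in one translation unit.

// First "inclusion": POINT_HPP is not yet defined, so the body compiles.
#ifndef POINT_HPP
#define POINT_HPP
struct Point { int x; int y; };
#endif

// Second "inclusion": POINT_HPP is already defined, so the body is
// skipped and there is no redefinition error. Compilers that recognize
// this idiom won't even reopen the file on subsequent #includes.
#ifndef POINT_HPP
#define POINT_HPP
struct Point { int x; int y; };
#endif
```

Deliberately omitting the guard (as some X-macro tricks do) is how you opt back into multiple inclusion when you want it.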
auto is a historic artifact from porting code from the B language to C, when everything was implicitly int but int did not exist yet. It had absolutely no use afterwards, which is why it could be repurposed in C++ as well. In C23 it was repurposed because it is very useful in combination with typeof() in macros, which is a far cry from SFINAE terrorism in C++.
Why don't you post it yourself? And why should I care what he says when Telegram has some of the worst default encryption settings among commonly used messaging apps in the West?
Except Telegram is considered one of the most secure apps around. Obviously it cannot stop people from being stupid when they expose themselves.
The very reason France is not happy is that they cannot get access to private chats and such. The EU was (and is) pushing for the end of E2E encryption, after all (it failed this time, but they will try again).
Durov created Telegram because the Russian government was trying to take over his original social network, VK (basically imagine the US government taking over Facebook). So he sold his shares and left the country.
I do find it hilarious to see apologists for government overreach like you.
What about group chat encryption? You cannot possibly say Telegram is more secure than Signal or WhatsApp.
What did I say that made me an apologist for government overreach? I recommend users use Signal? Your accusation is unfounded when I was complaining about a lack of encryption.
With WhatsApp it is pretty obvious at this point that it is in cahoots with governments regarding backdoors and such. With Signal? Who knows? Maybe it is too.
Governments don't go after services that they can access freely.
Somewhat unrelated, but it's worth pointing out that noexcept matters most specifically for move semantics.
In fact most C++ developers believe that throwing an exception in a noexcept function is undefined behavior. It is not: the behavior is defined to call std::terminate. Which leads one to ask how it knows to call that. Because noexcept functions have a hidden try/catch to decide whether to call it. The result is that noexcept can hurt performance, which is surprising behavior. C++ is just complicated.
I’m not well versed in C++’s exception system, but why can’t the unwind system itself call std::terminate? Why does it need to be the annotated method (that unwinding returns to)?
It doesn’t need to be, but the annotated function can still miss optimization opportunities, because it must be compiled as if it had a try-catch block if the compiler can’t prove the absence of exceptions, and this places constraints on reordering and inlining.
On the other hand, the guarantee given by noexcept can enable optimizations in the caller.
A try { } catch block that calls terminate has no overhead. Normally the constraints on reordering exist because e.g. constructor/destructor semantics and other side effects need to be accurately preserved during unwinding, but here any exception is going to result in a call to terminate, and destructors are not going to run.
This was the entire point of noexcept versus the throw() specifier...
This is unfortunately not always true, even with a table-based unwinder. In order to detect the noexcept frame and call terminate(), the unwinder must be able to see the stack frame that has noexcept. This means that the compiler must suppress tail call optimization when a noexcept function tail calls a non-noexcept function.
Because the exception can’t be allowed to escape the function marked noexcept. No matter the actual implementation, the exception has to be effectively caught in that no except function.
I also find it difficult to conceive of a case where adding noexcept would lead to slower/longer code, other than arbitrary noexcept overloads such as in TFA.
The article describes the performance implications of hash tables storing hashes. That it decides to do so based on examining noexcept() of the passed-in types doesn't make noexcept a pessimization itself.
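A minimal sketch of how a library can inspect the noexcept-ness of a passed-in type at compile time; `Loud` and `Quiet` are made-up toy types, and the trait shown is the same one std::vector effectively consults (via std::move_if_noexcept) when it reallocates:

```cpp
#include <type_traits>
#include <utility>

// Two toy types, invented for illustration: identical except for the
// noexcept annotation on the move constructor.
struct Loud  { Loud() = default;  Loud(const Loud&) = default;   Loud(Loud&&) {} };
struct Quiet { Quiet() = default; Quiet(const Quiet&) = default; Quiet(Quiet&&) noexcept {} };

// A container (or hash table) can branch on this trait at compile time
// and pick a strategy per type.
static_assert(!std::is_nothrow_move_constructible_v<Loud>,
              "Loud's move constructor may throw, so copying is preferred");
static_assert(std::is_nothrow_move_constructible_v<Quiet>,
              "Quiet's move constructor is noexcept, so moving is safe");
```

Nothing in this sketch makes noexcept itself slower; it only shows the compile-time branching that a container's author can choose to do.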
And, as you can see on the sibling thread where I'm being downvoted, there is an actual pessimization: since a noexcept function requires an eh_frame, it will not be able to tail-call (except for noexcept functions).
-fno-exceptions doesn't get rid of exceptions, it just causes your program to abort when one is thrown, which sounds kind of worse? How do you deal with (for example) a constructor that fails? And if you're using the Standard Library, it is very difficult to avoid code that might throw an exception.
> How do you deal with (for example) a constructor that fails?
The usual alternative is to have a factory function that returns std::optional<T> or std::expected<T, Err>. This avoids two-stage init, but has other tradeoffs.
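A minimal sketch of that pattern, using a hypothetical `Port` type whose constructor is private, so the optional-returning factory is the only way to obtain one:

```cpp
#include <optional>

// Hypothetical type whose construction can fail: a port number that
// must fit in 16 bits. Failure is reported via std::optional instead
// of a throwing constructor; if a Port exists at all, it is valid.
class Port {
public:
    static std::optional<Port> make(long value) {
        if (value < 0 || value > 65535) return std::nullopt;
        return Port(static_cast<unsigned short>(value));
    }
    unsigned short value() const { return value_; }
private:
    explicit Port(unsigned short v) : value_(v) {}
    unsigned short value_;
};
```

Callers write `if (auto p = Port::make(8080)) use(p->value());`; with C++23, std::expected<Port, Err> can additionally carry the reason for failure.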
Works rather well in Rust, but in C++ it's not as nice, for one because of the lack of something like Rust's question mark operator. (To be fair, that's kind of workaroundable with compiler extensions, but MSVC, for example, doesn't have most of them.)
In other words, you introduce an invalid state to every object and make constructing objects a lot more cumbersome. The first is the exact opposite of the (imo highly desirable) "make invalid states unrepresentable" principle, and the second is also a pretty extreme cost to productivity. I wouldn't say this is never worth it, but it's a very high price to pay.
A better solution than what rmholt said is to have a static method that returns std::optional<T>. This way if T exists, it's valid.
Later you get into another whole debate about movable types and what T should be once it's moved from (for example, if you want to turn that std::optional<T> into a std::shared_ptr<T>), if it only has non-trivial constructors. An idea that just popped into my head would be a T constructor taking std::optional<T>&&, which moves from it and resets the optional. But then it's not a real move constructor. With move constructors, a lot of the time you just have to accept they'll leave the other object in some sort of empty or invalid state.
I'm not saying it's perfect but it's better than dealing with C++ exceptions. At least with error codes you can manually do some printing to find out what went wrong. With C++ exceptions you don't even get a line number or a stack trace.
You do get invalid states with exceptions, in a much worse way. Think of an exception thrown from a constructor, or, even better, from a constructor of one of the members before the constructor's body is even reached. Not to mention managing state becomes much more complicated when every statement can cause stack unwinding unexpectedly.
> (...) throwing an exception in a noexcept function is undefined behavior.
Small but critical correction: noexcept functions can throw exceptions. What they cannot do is allow exceptions to bubble up to function invokers. This means that it is trivial to define noexcept functions: just add a try-catch block that catches all exceptions.
Hmm... If you were reading the documentation for function foo() and it read "if the argument is negative, foo() throws an exception", would you understand that the function throws an exception and catches it internally before doing something else, or that it throws an exception that the caller must catch?
> If you were reading the documentation for function foo() and it read "if the argument is negative, foo() throws an exception" (...)
I think you didn't understand what I said.
In C++ functions declared as noexcept are expected to not allow exceptions to bubble up. If an exception bubbles up from one of these functions, the runtime calls std::terminate.
This does not mean you cannot throw and handle exceptions within such a function. You are free to handle any exception within a noexcept function. You can throw and catch as many exceptions as you like while executing it. You just can't let them bubble up from the scope of your function.
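A small sketch of this, using a made-up `parse_or_default` helper: the noexcept function throws (via std::stoi) and catches entirely within its own scope, so the contract is never violated:

```cpp
#include <stdexcept>
#include <string>

// A noexcept function that throws and catches internally. This is
// perfectly legal: noexcept only promises that no exception escapes.
int parse_or_default(const std::string& text, int fallback) noexcept {
    try {
        return std::stoi(text);  // may throw std::invalid_argument
    } catch (...) {
        // Handled locally: nothing escapes, so the noexcept contract holds.
        // If this catch were missing, the runtime would call std::terminate
        // rather than invoke undefined behavior.
        return fallback;
    }
}
```

Callers see a function that simply never throws: `parse_or_default("nope", -1)` yields the fallback instead of unwinding.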
I think you're the one who didn't understand me. You made a correction about the meaning of the phrase "throwing an exception". The point of my question is that your correction is incorrect, because if you read the sentence "if the argument is negative, foo() throws an exception" you would indeed understand that an exception will unwind the stack out of the foo() call in such a situation. There's no difference between "foo() allows an exception to bubble up" and "foo() throws an exception"; both phrases describe the same situation.
I suppose you are technically correct that noexcept functions can throw to themselves. But that's just being pedantic, isn't it? From the observer/caller point of view the function won't ever throw. It will always return (or abort).
> I suppose you are technically correct that noexcept can throw to themselves. But that's just being pedantic, isn't it?
No. There are comments in this thread from people who are surprised that you can still handle exceptions within a noexcept function. Some seem to believe noexcept is supposed to mean "don't use exception within this scope". My comment is intended to clarify that, yes, you can throw and catch any exception from within a noexcept function, because noexcept does not mean "no exceptions within this scope" and instead only means "I should not allow exceptions to bubble up, and if I happen to do then just kill the app".
Yeah. Throwing from a noexcept function is often a better abort() than abort() itself because the std::terminate machinery will print information about whatever caused the termination, whereas abort will just SIGABRT.
> what.. noexcept throws exception..? what kind of infinite wisdom led to this
Not wisdom at all, just a very basic and predictable failsafe.
If a function that is declared not to throw any exception happens to throw one, the runtime handles that scenario as an egregious violation of its contract. Consequently, as it does with any malformed code, it terminates the application.
There’s a reason both Go and Rust eschew exceptions. They’re something that superficially seemed like a great idea but that in practice complicate things by creating two exit paths for every function. They also don’t play nice with any form of async or dynamic programming.
C++ should never have had them, but we have sane clean C++ now. It’s called Rust.
IMO the pendulum swung too far with Rust. The experience is better than C++, but the template generics system is not very powerful. They ostensibly made up for this with macros, but (a) they're annoying to write and (b) they're really annoying to read. For these reasons Rust is much safer than C++ but has difficulty providing fluent interfaces like Eigen. There are libraries that try, but AFAICT none match Eigen's ability to eliminate unnecessary allocation for subexpressions and equivalently performing Rust code looks much more imperative.
Rust doesn't have a template system per se. C++'s templates are closer to C's macros. Rust has a typed generics system which does impose additional limits but also means everything is compile time checked in ways C++ isn't.
I agree that Rust's macros are annoying. I think it was a mistake to invent an entirely different language and syntax for it. Of course Rust also has procedural macros, which are macros written in Rust. IMHO that's how they should all work. Secondary languages explode cognitive load.
I'm not attached to the word "template", I just wanted to clarify that they're not Java-style generics with type erasure. If you'd like me to use "monomorphizing generics" instead I'm game :)
Even procedural macros are annoying, though. You need to make a separate crate for them. You still need to write fully-qualified everything, even standard functions, for hygiene reasons. Proc macros that actually do, erm, macro operations and produce a lot of code cause Rust's language server to grind to a halt in my experience. You're effectively writing another language that happens to share a lexer with Rust (what's the problem with that? Well, if I'd known that I'd need another language to solve my problem I might not have chosen Rust...).
For all its warts, using constexpr if and concepts is much easier for macro-like programming than dealing with Rust's two worlds of macros and special syntax.
If static reflection does indeed land in C++26, this experience will get even better.
Rust panics are basically exceptions, aren’t they? Typically they aren’t caught without terminating. But you totally can. And if you’re writing a service that runs in the background you’ll probably have to.
Rust Result is basically a checked exception. Java makes you choose between adding an exception to "throws" or catching it, Rust makes you choose between declaring your function as returning Result or checking if you got an Err.
The only difference is that Rust has better syntactic sugar for the latter, but Result is really isomorphic to Java checked exceptions.
Panic could be said to be the same as an unchecked exception, except you have a lot more control over what causes them. The panic you get from calling unwrap() on an Option is the same as a NullPointerException, but you have full control over which points of the program can generate it.
Rust goes to substantial lengths to allow unwinding from panics. For example, see how complicated `Vec::retain_mut` is. The complexity is due to the possibility of a panic, and the need to unwind.
I’ve never written any Java so your comparisons are lost on me.
Rust Result is great. I love it.
The root article was talking about C++. Rust panic is basically the same as a C++ exception afaict. With the caveat that Rust discourages catching and resuming from panics. But you can!
Catching panics is best-effort only. In general, Rust panics can't be caught. (Even if a program is compiled with panic=unwind, this can change to abort at run-time.)
If you find that exception-free code, necessarily littered with exit-value checks at every call, which discourages refactoring and adds massive noise, deserves the labels "sane" and "clean", fine, but I find the resulting code to be neither. Or, practically speaking, exit codes will often not be checked at all, or an error check will be omitted by mistake, thereby interrupting the entire intended error-handling chain. Somehow that qualifies as a better outcome, or more sane? Perhaps it does for a code base that is not changing (write-once) or is not expected to change (i.e. throwaway code written in "true microservice" style).
It is better. That doesn't make it perfect but it's better.
This is to be expected, in fact Rust has to be a lot better to even make a showing, because C is the "default" in some sense, you can't just be similarly good, you have to be significantly better for people to even notice.
I expect that long before 2050 there will be other, even better languages, which learn from not only the mistakes Rust learned from, but the mistakes in Rust, and in other languages from this period.
Take Editions. C was never able to figure out a way to add keywords. Simple idea, but it couldn't be done. They had to be kludged as magic with an underscore prefix to take advantage of an existing requirement in the language design, in C++ they decided to take the compatibility hit and invalidate all code using the to-be-reserved words. But in Rust they were able to add several new keywords, no trouble at all, because they'd thought about this and designed the language accordingly. That's what Editions did for them. You can expect future innovation along that dimension in future languages.
It's categorically better because it's memory-safe. We just had another RCE bug in the Windows TCP/IP stack and it's 2024. This should not be happening.
Many C++ features are useless outside of writing libraries, but your typical developer is going to be forced to understand how they work at some point. The result is just a burden.
Half of their games didn't amount to much more than child gambling. I guess the saving grace is that people weren't there very often. Still I'd be curious to know if there is any sort of research as to whether childhood gambling transfers to adulthood.
Haven't been to Chuck E Cheese's in a while, but I wonder if they've toned some of that down since moving from tokens to (optionally "all you can play" time-based) cards.
Back in my day, the slot machines and coin pusher games served as a lesson against gambling, since they ate all your tokens really quickly and you almost never won anything significant.
I haven’t been to a Chuck E Cheese in at least 15 years BUT I have been to other “arcades”, including Main Event, and the trend for those has been make 90% of the games child gambling to maximize profit. It’s boring for my young children who don’t fully grasp the ticket concept and aren’t entertained by winning points. I remember the games of my childhood being much more about entertainment than they are now.
A better metaphor than gambling is carnival games, the prize return on investment in arcade gaming is systemically low. I've worked at several game companies and never known a single gamer in my life who's been interested in gambling.
The sports community is obviously what's being taken over by gambling. Sports are being so thoroughly defined by and ruined by it that it's weird to think of gambling as a big deal in other contexts. If you try to apply the gambling moral panic to video games it doesn't work because sports so conspicuously shows what it looks like when the slippery slope is real and not just hypothetical.
Trying to see the effects of gambling anywhere but with sports right now is like trying to see the stars when the sun is out.
I don't think reddit lets you do this in any more than a superficial way. I think reddit keeps the old edits internally so it won't harm LLM training. There were reports after the last protest of reddit reverting mass edits.
I mean, it's the only meaningful way of punishing the company/site. It becomes unusable, people don't contribute anything further because it stops being a hub/valuable site, and eventually their costs outweigh their benefits. That last bit is probably a pipe dream, but saying "Oh well, might as well still enrich them with my knowledge/contributions" doesn't seem like an alternative.