Why is that NaN handling sensible? I don't think it makes sense to say log(-1) equals log(-2). Mathematically it isn't true, and your implementation would only say it's true because of limitations in IEEE 754.
Lots of bad advice. Using unsigned for ordinary integers when you know they will be positive is worse for optimization, not better. Also, for (;;) {} is the convention because older compilers would warn on while (1).
Tried it and realized it was gimped compared to the Linux tools it was trying to emulate. Monopolies will always be playing catch-up with basic functionality people have built for free because it makes sense.
printing is never the appropriate tool. You can make your debugger print something when that line of code is reached anyway and automatically continue if you want. So what's the point of printf? It's just less information and fewer features.
Let me enumerate. Printf survives debugger restarts, shows up in git diff, usually messes less with the timing, can be exchanged with coworkers or deployed to users and be toggled with logging rules, has the full power of the programming language, the output is easier to run "diff" on to compare runs, works in CI containers, has no problems with mixed language environments...
As far as I'm concerned, breakpoints and backtraces, especially of crashes, are the superpower of debuggers. Where they are not immediately applicable, I usually don't bother.
Print debugging optimizes for timing detail and fidelity.
Debuggers optimize for data detail, at the expense of timing detail and fidelity.
In my opinion - timing & order of events is actually more meaningful.
I often don't need the extra data detail, and the debugger is going to change timing in meaningful ways.
Both are skills - both have their place. Know when logs are useful, know when debuggers are useful. Don't get stuck in the fallacy of "The one true way!" - that's the only thing I can promise is bullshit.
If you've never debugged a problem that goes away when a debugger is attached - you're woefully unprepared to have this conversation (It's much rarer, but I can even talk about times where adding a single print line was enough to change timing and make the issue harder to reproduce).
At the very least - with prints you get a tangible record of the order of operations during a reproduction. It doesn't go away, I don't have to remember how many times I've hit continue, I don't have to worry about whether pausing at this breakpoint is changing the world (and it is, because other things keep chugging - the network doesn't give a fuck that you're paused at a breakpoint).
I’m a huge advocate for using debuggers, but saying never print is too dogmatic, and sometimes incorrect. There are plenty of environments where a debugger is not available or very difficult to set up - GPU shaders historically, embedded environments on small/custom hardware, experimental languages, etc. Printing is both very easy, and often good enough. You should probably reach for a debugger if you keep adding prints and recompiling, or if you don’t fix your bug in a couple of minutes. But aside from that, print debugging is useful and has its place, even on occasions when a good debugger is available. Never say never.
I haven't touched GPU programming in ... uh ... decades, but is print debugging readily available for shaders? That was surprising, but glad to hear it! :)
Good point, it’s getting better, but often print statement debugging is not available in shaders either, and you have to resort to the visual form of print debugging: outputting a tinted color. Crude, but often enough it’s plenty effective. Personally, I mentally put shader tint debugging in the same category as CPU print debugging.
I'm firmly in the "use a debugger" camp, but printf is sometimes indispensable when attempting to debug race conditions. By confirming timing invariants hold with printf, you can usually narrow in fairly quickly on the problem. Doing the same in a debugger is much more of a hassle. It's not impossible, certainly, but it's way more of a pain in the ass.
Not this article again. His opinions on include files don't make sense anymore. Modern compilers keep track of which includes actually need to be reprocessed.
What the hell do the antitrust people in the US do? Google should have been chopped to bits a decade ago and Microsoft buying Github is just nonsense. Way too much potential for abuse all around.
Nothing apparently. We've stopped caring. If it's not about getting rich right now in this lifetime then it's not worth doing. I'm also convinced governments have realised monopolies are good for them. You don't need a big government if you control the few massive corporations everyone has to use.
You're giving the government too much credit. They're not even that competent at malice. It's the large companies that control it by lobbying, not the other way around.
But how does raising awareness help anything in this case? In the current political climate, companies are likely more afraid of presidential punishment for not supporting Israel than of any public disapproval for supporting Israel.