I don't think it's a myth so much as a misunderstanding of terminology. If an implementation defines some undefined behavior from the standard, it stops being undefined behavior at that point (for that implementation) and is no longer something you need to avoid except for portability concerns.
You're exactly right that this is why there is a distinction between conforming and strictly conforming code.
The problem is that, under the modern interpretation, even if some parts of the Standard and a platform's documentation would define the behavior of some action, the fact that some part of the Standard regards an overlapping category of constructs as invoking UB overrides everything else.
I could imagine misguided readings of some coding standard advice that would lead to that interpretation, but it's still not an interpretation that makes sense to me.
Implementations define undefined behavior all the time and users rely on it. For instance, POSIX defines that you can convert an object pointer into a function pointer (for dlsym to work), or implementations often rely on offsets from a null pointer for their 'offsetof' macro implementation.
Such an interpretation would be the only way to justify the way the maintainers of clang and gcc actually behave in response to complaints about their compilers' "optimizations".
> 1. Will the Apple's Blocks extension, which allows creation of Closures and Lambda functions, be included in C2X?
We haven't seen a proposal to add them to C2x, yet. However, there has been some interest within the committee regarding the idea, so I think such a proposal could have some support.
> 2. Are there any plans to improve the _Generic interface (to make it easy to switch on multiple arguments, etc.)?
I haven't seen any such plans, but there is some awareness that _Generic can be hard to use, especially as you try to compose generic operations together.
1. The reason I asked was because I remember reading the proposals N2030[1] and N1451[2] a while back. Were these never actually presented for voting? (not sure how the committee works)
Ah! No, those just predate my joining the committee and haven't really come up since they were presented.
Basically, every paper that gets submitted by an author will get some amount of discussion time at the next available meeting as well as feedback from the committee on the proposal. I'm not certain what feedback these papers received (we could look through the meeting minutes for the meetings the papers were discussed at to find out, though).
If that macro is defined, then wchar_t is able to represent every character from the Unicode required character set with the same value as the short code for that character. Which version of Unicode is supported is determined by the date value the macro expands to.
Clang defines that macro for some targets (like the Cloud ABI target), but not others. I'm not certain why the macro is not defined for macOS though (it might be worth a bug report to LLVM, as this could be a simple oversight).
Would the following be a correct way to determine whether there's a problem?
* First call setlocale(LC_CTYPE, "en_US.UTF-8")
* Next feed the UTF-8 string representation of every Unicode codepoint one at a time to mbstowcs() and ensure that the output for each is a wchar_t string of length one
* If all input codepoints numerically match the output wchar_t UTF-32 code units, then the implementation is officially good, and should define __STDC_ISO_10646__?
I think this is correct, assuming that locale is supported by the implementation and wchar_t is wide enough, but I am by no means an expert on character encodings.
When the committee considers proposals, we do consider the implementation burden of the proposal as part of the feature. If parts of the proposal would be an undue burden for an implementation, the committee may request modifications to the proposal, or justification as to why the burden is necessary.
Not off the top of my head, but as an example along similar lines, when talking about whether we could realistically specify two's complement integer representation for C2x, we had to determine whether this would require an implementation to emulate two's complement in order to continue to support C. Such emulation might have been too much of a burden for an implementation's users to bear for performance reasons and could have been a reason to not progress the proposal.
Not really -- vendors are free to ignore newer releases of the standard that do not meet their customers' needs, and the committee can't do much about it.
Well, yes. But putting the array argument(s) first is the more natural order, in my opinion. And it is surely odd that only one order is allowed in this context, when otherwise C is happy with changing the order of parameters to be whatever you like.
Plus, of course, there may be existing code using such functions, with parameters in the order that would become impossible if this syntax were disallowed.
If you're interested in the final TR, I would imagine we'd list it on that page you linked. If you're interested in following the drafts before it becomes published, you'd find them on http://www.open-std.org/jtc1/sc22/wg14/www/wg14_document_log... (A draft has yet to be posted, though, so you won't find one there yet.)
> Alternatively, please keep it going for a few hours if you would be able to be so generous with your time!
We're remaining active while there are still people asking questions, so the west coast folks should hopefully have the chance to ask what they'd like.
> Do you also answer questions about the standard libraries?
Sure!
> I'm wondering if Apple's Grand Central Dispatch ever made it into a more integrated role in C's libraries, or if it will forever remain an outside add-on.
GCD has not been adopted into C yet, and I don't believe it's even been proposed to do so by anyone (or an alternative to GCD, either).
It would be an interesting proposal to see fleshed out for the committee, and there is a lot of implementation experience with the feature, so I think the committee would consider it more carefully than an inventive proposal with no real-world field experience.
I'd ask them if they really meant "impossible" or just "harder than I wish it was".
I've typically found that the tradeoffs between security, performance, and implementation effort are more to blame for why writing secure C code is a challenge. There are a ton of tools out there to help with writing secure code (compiler diagnostics, secure coding standards, static analyzers, fuzzers, sanitizers, etc.), but you need to use all the tools at your disposal rather than relying on a single source of security, and that adds implementation cost and sometimes runtime overhead that needs to be balanced against shipping a product.
This isn't to suggest that the language itself doesn't have sharp edges that would be nice to smooth over, though!