Google places a pretty high emphasis on code quality and readability. It's not universally great, but it's a big part of the culture. You can catch a glimpse of it in their [style guides][1], [Abseil Tips of the Week][2] and [AIPs][3]. Almost every change is required to be reviewed for "readability" in addition to functionality. This can feel like a lot, but it leads to pretty consistent style across the codebase, which makes it a lot easier to switch projects or to read and understand an unfamiliar section of the code.
I enjoy how they state like 3 times that code reviews should be synchronous, yet "industry-standard" (aka: what people really do) is to toss it over the fence in a PR and go back and forth for several days with stylistic bullshit.
Code reviews are usually synchronous at Google, though; commenting and fixing things is like chatting with the reviewer, so reviews are usually done quickly. Not sure why this wouldn't be industry standard; is there any reason to make code reviews more painful than that?
I was kinda joking that places like Google might have good processes but most places just cargo-cult them and do it incorrectly. My current job does reviews that are so useless I don't even participate anymore, and no one cares. I really want to work somewhere that cares about code quality and has an effective process.
A lot of them are related to Google-internal practices or libraries, or are too opinionated (Google has strong internal C++ opinions that aren't necessarily correct, or even reasonable, elsewhere) to be useful.
TotW 110 in particular probably could be public, but it looks like they stopped externalizing them in late 2020, which is kind of sad, so I assume they never got around to it.
Yeah, that sounds about right to me. I used to help edit Testing on the Toilet and there were a lot of internal-tool-specific ones that we never made public. When we were low on content, we would happily publish an issue about someone's internal service/project, for example. It's not that we were trying to hide something, it would just be completely useless to the outside world.
totw/110 was about safely initializing global and static objects. IIUC something was missing from the open-sourced version of Abseil, so it could not be published verbatim.
Anecdotally, I've been having pretty bad anxiety this year from work that I feel should be easy but turns out to have many unexpected wrinkles. In retrospect, it seems like there may have been a lot of "justs" when discussing early solutions.
I've started to take a step back when approaching problems to better understand potential obstacles, but this still took a pretty big toll on my confidence.
I am going through the same thing this year. You're doing the right thing in reevaluating. Have confidence knowing that even if your previous work didn't go how you wanted it to, stopping and taking a look at _how_ you do that work is a sign of progress towards a point where your work meets and exceeds your expectations.
TrustedLen means collect() can safely use the upper bound, but Vec, for example, will still use the lower bound of the hint when the iterator isn't TrustedLen.
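To illustrate the hint side of this (TrustedLen itself is an unstable, nightly-only trait, so this sketch only shows size_hint and the collect() behaviour that follows from it):

```rust
fn main() {
    // size_hint() returns (lower_bound, Option<upper_bound>).
    let it = (0..10).filter(|x| x % 2 == 0);

    // filter() can't know how many items survive the predicate, so the
    // lower bound drops to 0 and only the upper bound (10) remains.
    assert_eq!(it.size_hint(), (0, Some(10)));

    // This iterator is not TrustedLen, so Vec pre-allocates using the
    // lower bound (0) and may have to grow during collect().
    let v: Vec<i32> = it.collect();
    assert_eq!(v, vec![0, 2, 4, 6, 8]);
}
```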
Ooh nice, I hadn't seen that. OK, so there are probably places where I assumed collect() wouldn't do as good a job as doing it manually, but I was wrong.
I wonder how the hell I didn't see that when I was tracking down how collect() ends up caring about TrustedLen.
Collaboration on a C++ codebase can be interesting. One of the most common approaches that organizations use is to choose a subset of the language and discourage committing code that doesn't conform to that subset.
This limits the sharp edges and, in particular, makes it easier to identify where they might exist. Perhaps the most common rule is a prohibition on exceptional control flow. This can seem really limiting at first, but encoding the possibility of errors in the type system and forcing callers to handle them explicitly is really powerful.
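As a minimal sketch of that style (shown here in Rust, where it's the default; in a C++ codebase the analogous tools would be something like absl::StatusOr or std::expected):

```rust
use std::num::ParseIntError;

// Failure is part of the signature: there is no invisible exceptional
// control flow for the caller to forget about.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse()
}

fn main() {
    // The caller has to unpack the Result to get at the value.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }
}
```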
Sure, but the effect of such guidelines, including the "Core Guidelines", is to further shatter the language into effectively incompatible dialects while not really solving anything.
One of the book's authors actually proposed work to get C++ to a place where C++ 20 could begin abolishing the things everybody agrees you shouldn't do: Vittorio's "Epochs" proposal. https://vittorioromeo.info/index/blog/fixing_cpp_with_epochs...
But the preference for C++ 20 was to add yet more stuff. I believe that, unwittingly, the C++ 20 process sealed the fate of the language in choosing to land features like Concepts but not attempt Epochs. I'm not sure Epochs was actually achievable, but if it wasn't possible in 2020 it's not going to be any easier in 2026, and it's only going to be more necessary.
The active subset is always the set of features not supplanted by the latest Standard supported by the tooling in use. Anything newer is safer and nicer than the old crap, so there is no temptation to stick with it, except among people who have elected not to learn anything new. Those people are typically approaching retirement anyway.
There is still a very large set of people using older-style C++, particularly those who came from C. I can assure you they are a long, long way from retirement.
This is particularly true of large code-bases in established firms.
Personally, I think it would be an interesting idea to soft-ban most raw pointer use and allow only shared and unique pointers. As pointed out in an earlier comment, it gives a 'Python'-like experience without the performance compromise.
That said, the exception is when you need to use well-tested existing libraries such as libmicrohttpd to implement a local REST server without buying into some mega-framework.
There's nothing about Epochs which cares about software licensing.
Yes, you will need the source code to C++ software to actually compile it but the relationship to licensing is at most coincidental.
Much of the time when you really can't negotiate source code, it's because it's a "pig in a poke" IP sale: what you're buying was actually either worthless or the seller never really owned it anyway. The video games industry is certainly rife with this.
"Because of ISO" has to be the lamest C++ excuse, and perhaps the fact it's seeing more use recently reflects a larger problem.
Also, remember that the ISO document has been known for two decades not to actually describe the language as implemented, but the fix (pointer provenance) is controversial enough that the committee keeps "forgetting" to make it. The ISO document describes an imaginary language which would be incredibly slow and useless if it really existed; the document exists because the real C++ language is sort of similar to this imaginary language, and for some people (though it seems gradually fewer over time) that's enough. Any pretence that ISO requires it to "work for everyone" is nonsense, since as documented it isn't in fact working for anyone.
† One of the fun side effects is that you can't explain what ARM Morello ports do in terms of the ISO standard. CHERI reifies provenance, which would be fine if the standards document explained what provenance is, but the C and C++ documents deliberately do not do that. On paper Morello ports just waste space to make some valid programs crash, which is weird.
You can do CHERI with Rust exactly as it stands (and people have done that); it's just that usize and isize end up bigger than you'd want, because now we're stashing provenance, so the other options are worth exploring.
Rust actually talks about provenance (messing with pointers is of course unsafe, so most of The Book doesn't need to worry about this), so this isn't a mystery in Rust. Indeed, the API as it stood before Aria's work already made it clear that you're stepping off the edge of a cliff if you try address arithmetic and similar provenance-defeating magic. Aria's "Tower of Weakenings" is about firming up some of those first steps, allowing us to do some Road Runner don't-look-down bullshit and know whether we can get away with it on real hardware with the real compiler, while still forbidding things that are definitely crazy (the ISO documents say such things work in C++, but they do not work in actual C++ compilers; they have, drum roll... Undefined Behaviour).
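For a concrete taste, here's a sketch using the strict-provenance methods from that work (nightly-only at the time, behind a feature gate; names per the then-current API):

```rust
// Nightly-only when this was written.
#![feature(strict_provenance)]

fn main() {
    let data = [7u8; 32];
    let p: *const u8 = data.as_ptr();

    // addr() gives you the address as a plain usize, with no provenance
    // attached, so you can't (and shouldn't) turn it straight back into
    // a usable pointer with `as`.
    let a: usize = p.addr();

    // with_addr() derives a new pointer *from p*, keeping p's provenance
    // while changing only the address; here we step 8 bytes into `data`,
    // staying inside the original allocation.
    let q: *const u8 = p.with_addr(a + 8);

    // Reading through q is fine: it still carries the provenance of `data`.
    assert_eq!(unsafe { *q }, 7);
}
```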
Aria did a bit more than write some blog posts, for example:
Again though, this isn't about licenses. Regardless of why you don't have the source code, that's where the problem is. "Gee, it's difficult to compile this program without source code" is maybe somewhere C wants to draw a line in the sand, because of its very simple ABI, but C++ is long past the point where that's useful, and people need to stop pretending otherwise; it's a cause of enormous frustration in the community.
The way Rust epochs work requires a compiler that is fully aware of all existing epochs and goes through all the code applying the epoch semantics each crate expects in its build definition.
Also, epochs can't introduce backwards-incompatible semantics that break across epochs: imagine a crate exporting something whose runtime representation changes between epochs and is used as a parameter in some callback implemented in that epoch.
Syntax. Rust (since 1.0 in 2015) has a single Abstract Syntax, and the written syntax of each Edition is translated to the Abstract Syntax.
Rust provides tools which will do the equivalent transformation to your actual code, but of course this transliteration is ugly, so the preferred form for further development of a 2015-edition crate is the 2015-edition code; the transform is useful if you've decided to actually update to a newer edition, so that it serves as a working (but perhaps slightly ugly) starting point.
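A tiny example of that split between written syntax and abstract syntax, using the 2015-to-2018 `dyn` change (the migration tooling mentioned above is `cargo fix --edition`):

```rust
use std::fmt::Display;

// The 2015 edition accepted the bare trait-object syntax:
//     fn describe(x: Box<Display>) -> String
// Later editions spell the same type `Box<dyn Display>`. Both written
// forms lower to the same abstract syntax, which is why crates built
// on different editions can still be compiled and linked together.
fn describe(x: Box<dyn Display>) -> String {
    format!("value: {x}")
}

fn main() {
    println!("{}", describe(Box::new(42)));
}
```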
This is already rather more compatibility than C++ has delivered. Rust 2015 Edition works just fine on a brand-new Rust compiler in 2022, whereas a C++ 20 compiler can't always compile valid C++ 98, C++ 03, C++ 11, C++ 14 or C++ 17 code except via a "version" switch of some sort. They are, ultimately, six distinct versions of the language.
It's true that this limits what is possible with a Rust Edition. However, because so much is possible, it inspires people to put in that extra little bit of effort to get things done compatibly.
For example, in Rust 1.0 you couldn't just iterate over an array. The built-in arrays didn't implement IntoIterator, because that needs const generics and Rust 1.0 didn't have const generics. Fast forward to the Rust 2021 edition, and this works. But wait, that should be impossible: editions can only change syntax, and you can't have arrays implement IntoIterator only on newer editions, because that's not a syntax difference.
So there's a hack. On modern compilers, for the Rust 2015 and 2018 editions, when you write x.into_iter() and "x" is an array, the compiler silently ignores the fact that it knows arrays are IntoIterator and continues considering other options, crucially (&x).into_iter(), the iterator over a reference.
That's all the hack does, though. So arrays are IntoIterator: you can use them as you'd expect for a container even from Rust 2015 code now, you can for-loop over them, you can pass them to functions which want an IntoIterator. You just can't use that one method-call syntax, which would always have done something else.
This feature costs Rust a little hack in method resolution for the 2015 and 2018 editions forever, and that's all.
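Sketched out, the visible behaviour of the hack (assuming a post-1.53 compiler):

```rust
fn main() {
    let xs = [1, 2, 3];

    // Works on every edition with a modern compiler: for loops go
    // through IntoIterator, and arrays implement it since Rust 1.53.
    for x in xs {
        println!("{x}");
    }

    // Only the method-call syntax is special-cased: on the 2015 and
    // 2018 editions `xs.into_iter()` resolves to `(&xs).into_iter()`
    // and yields &i32, exactly as old code expected. The fully
    // qualified form below iterates by value on every edition.
    let sum: i32 = IntoIterator::into_iter(xs).sum();
    assert_eq!(sum, 6);
}
```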
I bet that if Rust achieves the market size C++ enjoys in 2022, with the same variety of compilers, OS support and deployment scenarios, then over the next 40 years those hacks will end up just as messy as a language-version switch in any modern language today.
Unfortunately, most likely I won't be around to collect on it.
I can't guess what would constitute "the same market size", especially because, as we saw, C++ is actually six different, slightly incompatible languages.
For the purposes of defining their "success", people like Stroustrup seem to consider the firmware developer who compiles their caveman 1990s C code with a GNU C++ compiler the board vendor provided to be a "C++ programmer", although if you ask that developer about, say, operator overloading (a C++ 98 feature) they will look blank, 'cos even C89 is a bit novel for them.
At the other extreme, HN resident C++ apologist ncmncm seems clear that nothing short of C++ 20 (a language which is documented but not yet fully working in compilers you can get) really counts as C++, and so if you're not writing Modules with Concepts (and presumably reporting Internal Compiler Errors left and right as you work) that's not really C++.
Those are some very different-sized "markets". I suspect Rust is already at similar numbers to the latter, but it is a very long way off the former.
But yes, a 47-year-old Rust will be crusty. However, with any luck PL research won't have stood still for forty years, and we'll be recommending people adopt something a bit less crusty for new work. One of the things that's different is obviously the Rust community and its ecosystem, which means it's not necessarily about winning converts for "the cause". A better language than Rust is a good thing, not a bad thing.
Because different types of software make good use of different subsets of C++. A "footgun" for one codebase is a high-leverage feature for another. The subset of "no footguns" C++ that is common to virtually all applications probably converges asymptotically on the empty set. A feature of C++ is that you can precisely select an optimally expressive subset for most applications, instead of being forced to write code in a language that clearly wasn't designed with your use case in mind.
Having a lot of tools in your toolbox doesn't necessitate using all of them in inappropriate contexts. I don't try to hammer a screw just because I own a hammer.
This analogy sounds nice but is deceptive. The problem is that there's no place where a footgun is appropriate. I need a toolbox for screwing things, so I get a screwdriver toolbox, but in that toolbox is a footgun designed to blow my foot off. Why? In what context is that footgun appropriate? Should I put it in the hammer toolbox? C++ has footguns everywhere, and there's no context where many of them are appropriate.
The other thing with C++ is the complexity. The toolbox is so jam-packed with tools that I basically can't comprehend the full ecosystem and how everything works. And when I pick one tool, that single tool itself has like 20 different ways of being used, with a bunch of edge-case gotchas.
Because that subset can still be better at solving your problem than those other languages: it could be faster, more mature, better suited to certain problems, able to leverage an existing ecosystem, etc.
This is exactly true. No one in the C++ world uses all of C++ in their project. It is a multi-paradigm language, and you are free to choose one (or a few) of the paradigms and follow it throughout the entire project. For example, you can choose to base your project on virtual functions and interfaces, or you can employ static polymorphism instead. Remarkably, even the C++ standard library, for all its intricacy and complexity, does not use all of C++. It is also useful to see C++ as two languages: one for library developers, and one for application developers.
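That same choice shows up outside C++ too; as a sketch in Rust terms, "virtual functions and interfaces" versus "static polymorphism" is trait objects versus generics:

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// Dynamic polymorphism: one copy of the code, dispatched through a
// vtable at runtime (the analogue of virtual functions).
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Static polymorphism: a copy per concrete type, resolved at compile
// time (the analogue of templates).
fn print_area<S: Shape>(s: &S) {
    println!("{}", s.area());
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Circle { r: 1.0 })];
    println!("{}", total_area(&shapes));
    print_area(&Circle { r: 2.0 });
}
```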
There are lots of other advantages and disadvantages to any given language. You're right that this is a mark against C++, but wrong in the assumption that it (always) outweighs all other considerations (performance, familiarity, compatibility with an existing C++ codebase, portability to odd architectures). (Personally, I don't like C++ and prefer Rust. But it is also reasonable to choose C++.)
> If I have to use a subset of a language to avoid shooting my feet, why not just use a language without footguns?
Because it has absolutely nothing to do with footguns.
The main factors in picking subsets, beyond personal taste in coding style and office politics, boil down to a) not introducing exceptions into legacy code, which would mean revalidating what is otherwise solid code, and b) not being forced to deal with the complexity of template metaprogramming when the value it adds is negligible.
Standard ML is incredibly usable but also very small (syntactically and semantically), especially compared with modern Algol-family languages.
There are a couple of quality-of-life features missing (most notably record update syntax), but I really enjoy that the core language has not changed in 25 years.
There is also ongoing discussion and other work on something called "successor ML" (sML). See smlfamily.org for further information. Functional record update is one of the features contemplated.
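For readers who haven't seen the feature: functional record update builds a new record from an old one with some fields replaced. Rust's struct update syntax is a rough equivalent (the Config type here is just an invented example):

```rust
#[derive(Debug, Clone)]
struct Config {
    host: String,
    port: u16,
    verbose: bool,
}

fn main() {
    let base = Config { host: "localhost".into(), port: 8080, verbose: false };

    // "Everything from `base`, except verbose": the update syntax
    // fills the remaining fields from the source value.
    let noisy = Config { verbose: true, ..base };
    println!("{noisy:?}");
}
```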
It's a great functional language for learning pattern matching, type inference, polymorphism, etc. in an academic setting, and a great language to compare and contrast with others (dynamic, OOP, imperative, etc.), but I'm not sure how useful it is beyond that; I've never seen it used in the business world. When all you knew was C/C++/Java, Standard ML throws you for a loop, in a good way.
At my current job we use Scala extensively. I didn't have much experience with it, but found it pretty easy to pick up, mostly due to its similarity to Standard ML (much more so than e.g. Java, Haskell or Python, which are also clearly influences).
In fact, my only previous experience with Scala was from hacking on Isabelle/HOL, which is mostly Standard ML but happens to use Scala in the periphery (scripting, UI, etc.)!
The Curry-Howard isomorphism doesn't say that an interesting proof corresponds to a program anyone would care about (other than as an equivalent to the proof), nor that an interesting program corresponds to a proof anyone would care about (other than as an equivalent to the program).
In practice, writing programs and finding proofs feel like quite different activities. At least in my experience of doing both, I almost never try to write a program by thinking "how could I prove that such-and-such a type is nonempty", and I literally never try to prove something by finding a corresponding type and writing a program that computes or represents something of that type.
The Curry-Howard isomorphism is a beautiful theorem, and as you say it inspires some nice methods of automated proving / proof-checking, but I don't think it says anything much about the relationship between the programming and theorem-proving activities that actual humans engage in.
> Any time you write a program, you are also writing a proof.
True, and (mostly) irrelevant. When I write a program, I am also writing a proof, but I don't care about it as a proof. It proves something that I am completely uninterested in proving. But as a program, it's a program that I am interested in running.
> Interesting that you contradict yourself in successive sentences
No, I didn't. You managed to either completely miss my point, or to completely ignore it.
Just in case you didn't ignore it, let me try again. I could care about a proof for the sake of the proof. If that were the case, I probably wouldn't write a program, I'd just write the proof. (Yes, that could be mechanically turned into a program, but I didn't actually do that, because I cared about the proof rather than the program.)
When I write a program, I usually care about it as a program, not as a proof. When I write a GUI, say, I care about how it enables the users to access the rest of the program's features. You can turn that into a proof, and the compiler may in fact do so, but what it proves is something that is really unrelated to what I'm trying to do. The "proof" won't prove anything about usability or human factors or pleasing color scheme or user workflow or efficient use of screen real estate. It will prove something that, from the programming point of view, is completely uninteresting.
So, as I said, I'm not interested in the proof, I'm interested in the program. And, despite the correspondence, they are not the same thing.
>I'm not interested in the proof, I'm interested in the program
Maybe not in the abstract, but you are, in fact, "interested in the proof": when the program doesn't do what you want it to, you either have to fix it so that it does prove your use case, or you've proven that what you were trying to do was incorrect.
There's a middle ground here, where the input and output types of a function (informally) denote pre- and post-conditions, and a (correct) program is equivalent to a proof that the pre-condition implies the post-condition.
A function List -> Sorted List can be considered both a proof that lists can be sorted and a procedure for sorting lists.
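A sketch of that reading (the Sorted newtype is hypothetical, and Rust's type system only documents the invariant rather than enforcing it; a dependently typed language could enforce it):

```rust
// Hypothetical wrapper whose intended invariant is "the inner Vec is
// sorted". In Rust this invariant is by convention, not checked by types.
struct Sorted<T>(Vec<T>);

// Read operationally: a sorting procedure.
// Read Curry-Howard style: the type `fn(Vec<T>) -> Sorted<T>` states
// "every list can be sorted", and the body is the proof term.
fn sort<T: Ord>(mut v: Vec<T>) -> Sorted<T> {
    v.sort();
    Sorted(v)
}

fn main() {
    let s = sort(vec![3, 1, 2]);
    assert_eq!(s.0, vec![1, 2, 3]);
}
```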
Programs are proofs in the same way that solutions to arithmetic problems are proofs: they typically don't prove anything interesting, so they are not considered proofs, even though in principle you could map every program to some set of mathematical statements and vice versa.
Edit: And for the topic in question, if you don't understand what you are proving then you aren't working with proofs. I really doubt that programmers actually construct their programs by mapping them to mathematical statements, seeing what they prove, and then mapping them back to programs, so they are writing programs, not proofs.
This one seems highly relevant: https://wizardzines.com/zines/containers/
This was also posted on HN a couple weeks ago: https://earthly.dev/blog/chroot/