Increasingly popular are the "single header libraries", which were popularised by Sean Barrett[0]. It's as simple as downloading the header, putting it somewhere in your tree and #including it where you need it. It's especially useful for redistributing libraries, but I've also found it useful to create these in my own projects.
Is having large amounts of code in headers common in real world C code (aside from stb)? Seems like it would be a nightmare to chase down issues, though maybe compiler error messages have gotten smarter since I've last dealt with C.
Header files are not special in C. It is a common convention to put interface in .h and implementation in .c files. But the preprocessor expands all the #includes together into one big file before the compiler proper attempts to compile it.
Modern compilers have error messages that show you the whole chain of #includes, and on top is the filename and line number, whether it ends with ".c" or ".h" or anything else.
Exe size -- Sean usually includes a macro that lets you include the implementation only once (so you can use it as a pure header in multiple places, then as a definition just once).
Compile times -- if you use that macro, the implementation will be skipped by the preprocessor, which is really fast.
Even if you build the whole code multiple times in your project, C compilers are also very very fast these days.
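For anyone who hasn't seen the pattern, here's a minimal sketch of such a single-header library (hypothetical names; the real stb headers work the same way, just bigger). The header contents are shown inline, with the implementation macro defined up top the way exactly one source file in a project would do it:

```cpp
// In exactly one source file, define the macro before including the header,
// so the implementation is compiled exactly once:
#define MYLIB_IMPLEMENTATION

// ---- contents of a hypothetical mylib.h ----
#ifndef MYLIB_H
#define MYLIB_H
int mylib_add(int a, int b);   // interface: safe to include anywhere
#endif

#ifdef MYLIB_IMPLEMENTATION    // implementation: only compiled where the
int mylib_add(int a, int b) {  // macro was defined
    return a + b;
}
#endif
```

Every other file just does `#include "mylib.h"` without the macro and sees only the declarations, so the preprocessor skips the implementation there.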
Many people follow Sean's style but use C++. That can be much slower to compile if you use fancy features. But if you avoid templates it's usually fine.
It's interesting that many big C++ projects are also header-only, but for totally the opposite reason -- they use templates everywhere so nothing can be compiled separately. That approach definitely does slow down compilation and increase your code size.
Yes, but unfortunately for C++ libraries that make heavy use of templates, it's the only solution. Because templates need to be instantiated to generate any code, they can't be compiled ahead of time into shared or static libraries. So in C++ you have this stupid nonsense.
Instantiating the templates at link time would probably make more sense. With LTO we're already deferring a significant portion of the compilation process to link time so this is an obvious extension.
This issue was also on the mind of the C++ standards committee when they began work on the new module system. I'm not sure how the current spec behaves with respect to template instantiation, though.
C++98 contained some provisions for that ("export template"), but anecdotally there is exactly one compiler that supports it, and the whole mechanism was found to cause more problems than it solved. See: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n142...
The problem with template instantiation is that the mechanism is too general to be amenable to any kind of meaningful precomputation.
With Lisp you also need to have the macro definition ready when compiling code that depends on it. This is essentially the problem that all the various "system definition facilities" try to solve in at least somewhat usable manner.
You need the package definition for the source-code-read time of whatever you're compiling, and the macros for its macro-expansion time. Practically speaking, these are tied together.
It is not equally powerful, but Java's approach is erasure. This means that at compile time the language can differentiate List<String> and List<Buffer>, and will type check them correctly, but at runtime the generic type gets "erased" and all the code turns into List<Object> (where in Java all types derive from Object).
As a result, there do not need to be multiple specializations of the List type for strings and buffers. No matter how many different parameterizations there are of a given generic class, there is only one implementation. I think this is probably a good thing most of the time, since on average the performance gains of this form of specialization are insignificant compared to the costs of code size. You wouldn't want 100 or more different implementations of List clogging memory and needing to be JIT-compiled.
One of Java's constraints is also that all generic types are reference types. So one reason why it wouldn't be too useful to instantiate separate List<String> and List<Buffer> is because these are basically just List<ReferenceToObject> anyway. C++ templates can provide both value and reference semantics, and can work on primitive types such as `int` which may reside on the stack, whereas Java generics only operate on full-fledged garbage collected objects that are dynamically allocated on the heap.
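To make the contrast concrete, here's a small C++ sketch (not from the original comment): each instantiation is a distinct type with its own generated code, and it can hold primitives by value, which is exactly what erasure rules out:

```cpp
#include <string>
#include <vector>

// std::vector<int> and std::vector<std::string> are two distinct types with
// separately generated code. The ints are stored inline and contiguously;
// there is no per-element heap object as with Java's erased List<Integer>.
std::vector<int> nums = {1, 2, 3};
std::vector<std::string> words = {"a", "b"};
```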
The disadvantage of this approach is that it precludes some constructs that are possible in other languages. For example, in C++ a template method can take some type `T` and construct a new instance of it, but the equivalent is not legal in Java:
<T> T broken() { return new T(); } // compile error: cannot instantiate the type parameter T
In C++ you could also create a new local variable of type `T` that will be allocated on the stack. This `T` can represent anything from a primitive `int` to a complex struct or class, or a pointer or reference to a dynamically allocated object. The C++ code could then dynamically allocate a new instance of `T`.
This makes it more difficult to implement generic programming patterns analogous to things like std::allocator in C++, and to deliberately provide different implementations of parameterized types and algorithms. Fortunately, these are not very significant practical downsides in Java.
All modern compilers support precompiled headers. If it's a library external to your project, you just compile it once into a .pch file, and when you change your own code, the .pch is just loaded without the headers having to be compiled again.
I tend to use a single compilation unit, and use plain C. For a 50k line project I have build times of 1-1.2sec. On top of that I'm also linking to opengl, windows.h and a couple of headers in the standard lib.
Exactly :) I come from a .NET background, where compile times are kinda bad compared to what I get in C. I had also heard of big C++ projects that could take more than 30 mins to build. So I was always scared to move to native code. Until I learned the single compilation unit trick! And if compile times were to get slow again, there are many more ways to bring them down even further.
Since when are .NET compile times worse than C's? csc compiles extremely fast; most long compile times are caused by MSBuild doing... not much at all, really.
I work on a much bigger project and we use that method (unity builds) - it helps for full builds and on small projects, but it makes incremental changes very slow on bigger projects.
C++ is a little different in that templates have to go in header files. A lot of template-based C++ libraries are entirely templates in header files.
It's weird because it does feel like it violates the separation of interface and implementation. On the other hand, it's really efficient. It has the advantages of doing generics with the C preprocessor (efficiency and de-duplication of code, if nothing else), minus many of the disadvantages of doing generics with the C preprocessor.
Putting non-template code or data in headers is something I've seen people do for expediency, but it's more trouble than it's worth IMO. As soon as two files in the same project include it, you risk duplicate-symbol errors at link time.
Technically you can put the template definition in a separate translation unit and rely on explicit instantiation. The linker will tell you exactly what you need to instantiate. It is tedious and seldom done.
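A minimal sketch of that technique (hypothetical function; the notional file boundaries are shown as comments):

```cpp
// widget.h -- declaration only; no template body in the header:
template <typename T> T twice(T x);

// widget.cpp -- the definition lives in one translation unit, together with
// explicit instantiations for every type the rest of the program uses:
template <typename T> T twice(T x) { return x + x; }
template int twice<int>(int);           // emits twice<int> into this TU
template double twice<double>(double);  // emits twice<double> into this TU
```

Other translation units can call twice(21) through the declaration alone; if a needed instantiation is missing, the linker reports the unresolved symbol, which is how it "tells you exactly what you need to instantiate."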
Inline is the keyword here (literally). Inline, in C++, is defined as disabling the requirement of having exactly one definition of a function (it is also a weak hint for the compiler to perform inlining).
Template functions are implicitly inline.
Inline in C has a similar but subtly different definition.
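A small sketch of the C++ behavior (hypothetical function names):

```cpp
// In C++, "inline" relaxes the one-definition rule: this definition may
// appear in every translation unit that includes the header, and the linker
// merges the copies instead of reporting a duplicate symbol.
inline int answer() { return 42; }

// Function templates get the same treatment implicitly, which is why their
// bodies can live in headers:
template <typename T>
T identity(T x) { return x; }
```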
It's called a header for a reason, unless we are now redefining the meaning and intent of headers, so that people could stick C++ template code in them?
Oh, where is the officially sanctioned definition of the meaning and intent of a header?
So glad you asked: The C Programming language, 2nd edition, page 82, chapter "FUNCTIONS AND PROGRAM STRUCTURE", as follows:
"There is one more thing to worry about--the definitions and declarations shared among the files. As much as possible, we want to centralize this, so that there is only one copy to get right and keep right as the program evolves.
Accordingly, we will place this common material in a header file, calc.h, which will be included as necessary."
Despite what the code may look like, they all have a very easy to use API. Take stb_image.h for example. Most people need just two functions: stbi_load(), which loads any supported image format into a byte array, and stbi_image_free(), which frees the data.
But if you need anything more than that it's all there. e.g. loading from memory, loading via callbacks, support for HDR images, support for custom allocators, preprocessor flags let you exclude code for unused image formats, etc etc
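A typical use looks like this (a sketch; it assumes stb_image.h is on your include path and that a photo.png exists, so it isn't runnable as-is):

```cpp
// In exactly one source file:
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

int main(void) {
    int w, h, channels;
    // The last argument forces 4 channels (RGBA) regardless of the file's format.
    unsigned char *pixels = stbi_load("photo.png", &w, &h, &channels, 4);
    if (!pixels)
        return 1;  // stbi_failure_reason() describes what went wrong
    /* ... use the w * h * 4 bytes of pixel data ... */
    stbi_image_free(pixels);
    return 0;
}
```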
Firefox has more than a 4% market share[0][1]. One of the main reasons I've heard is that it's free, and that the Mozilla Foundation is a non-profit.
I also just found this bit, which explains how some browsers might be over- or underestimated because of how their internals work[2]. So we really don't know what the market shares actually are.
Using Google to check the market share of a product that rivals Google's Chrome. It's interesting how important Google Search is and how powerful it could be. (BTW, I'm certainly not implying Google's algorithms are skewed.)
My biggest gripe with OOP is the Oriented part. If you design your entire codebase around OOP you will run into architectural problems. Especially with so-called Cross Cutting Concerns[0]. The way I tend to write code, is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out. I have heard this approach being called "Compression Oriented Programming", but I don't care much for what people want to call it.
This approach doesn't mean no objects ever. But only when your problem actually calls for it. Likewise you will also end up with parts that are purely functional, data-oriented, etc. But they will be used where they make sense.
On top of that I'm also using pure C99. It does away with a lot of the fluff and cruft in other languages. In the past I used to try to fit my problems into whatever fancy language features I was offered, which cost me a lot of time in analysis. Now I just solve my problem.
Mind you, C is not a perfect language. There are features I wish it had. But for my approach to programming it is the most sensible to use. Apart from maybe a limited subset of C++ (such as function overloading and operator overloading for some math)
> The way I tend to write code, is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out.
That is the same technique that Stepanov describes in 'From Mathematics to Generic Programming'.
Thanks for finding that out. We (Firefox) fixed a box shadow performance bug [1] in Firefox 47, which will be released on June 7th. It looks like this site is hitting that perf bug.
The most surprising detail to me was how much water they have to drain out of the metro each day:
"As a result, he said, they “discharge approximately 2-million gallons of water a day.” In other words: about three Olympic sized swimming pools worth."
The Community edition is basically Pro, with different licensing. From what I understand, that's the version that would make the most sense for the target audience of DreamSpark (college students) anyway.