Hacker News | saticmotion's comments

I'm really glad they added a dark mode. But performance is absolutely horrible, I don't get how this made it through QA.


Probably the "move fast and break things" approach.


For those who want a very in depth tour of all metal genres, with example songs, check this website out: http://mapofmetal.com/#/home


Increasingly popular are the "single header libraries", which were popularised by Sean Barrett[0]. It's as simple as downloading the header, putting it somewhere in your tree and #including it where you need it. It's especially useful for redistributing libraries, but I've also found it useful to create these in my own projects.

[0]https://github.com/nothings/stb


Is having large amounts of code in headers common in real world C code (aside from stb)? Seems like it would be a nightmare to chase down issues, though maybe compiler error messages have gotten smarter since I've last dealt with C.


Header files are not special in C. It is a common convention to put interface in .h and implementation in .c files. But the preprocessor expands all the #includes together into one big file before the compiler proper attempts to compile it.

Modern compilers have error messages that show you the whole chain of #includes, and on top is the filename and line number, whether it ends with ".c" or ".h" or anything else.


Doesn't this hurt compile times and increase exe size?


Not really!

Exe size -- Sean usually includes a macro that lets you include the implementation only once (so you can use it as a pure header in multiple places, then as a definition just once).

Compile times -- if you use that macro, the implementation will be skipped by the preprocessor, which is really fast.
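The guard trick can be sketched in a few lines (mylib.h and MYLIB_IMPLEMENTATION are made-up names, following the naming convention stb uses):

```cpp
// Sketch of an stb-style single-header library (all names hypothetical).
// Every file that needs the API does:   #include "mylib.h"
// Exactly ONE file does:                #define MYLIB_IMPLEMENTATION
//                                       #include "mylib.h"
#define MYLIB_IMPLEMENTATION  // pretend this is that one implementation file

#ifndef MYLIB_H
#define MYLIB_H
int mylib_add(int a, int b);  // interface: seen by every includer
#endif

#ifdef MYLIB_IMPLEMENTATION
// implementation: compiled into exactly one translation unit,
// so the linker never sees duplicate definitions
int mylib_add(int a, int b) { return a + b; }
#endif
```

In every other file the `#ifdef` branch is skipped, so the preprocessor throws the implementation away almost for free.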

Even if you build the whole code multiple times in your project, C compilers are also very very fast these days.

Many people follow Sean's style but use C++. That can be much slower to compile if you use fancy features. But if you avoid templates it's usually fine.

It's interesting that many big C++ projects are also header-only, but for totally the opposite reason -- they use templates everywhere so nothing can be compiled separately. That approach definitely does slow down compilation and increase your code size.


Yes, but unfortunately for C++ libraries that make heavy use of templates, it's the only solution. Because templates need to be specialized to generate any code, they can't be compiled ahead of time into shared or static libraries. So in C++ you have this stupid nonsense.


Stupid nonsense? What would your alternative be, mind you, an alternative that would be as powerful and performant? Is there even such a thing?


Instantiating the templates at link time would probably make more sense. With LTO we're already deferring a significant portion of the compilation process to link time so this is an obvious extension.

This issue was also on the mind of the C++ standards committee when they began work on the new module system. I'm not sure how the current spec behaves with respect to template instantiation, though.


C++98 contained some provisions for that ("export template"), but anecdotally there is exactly one compiler that supports it, and the whole mechanism was found to cause more problems than it solves. See: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n142...

The problem with template instantiation is that the mechanism is too general to be amenable to any kind of meaningful precomputation.


Lisp does fine with separately compiled macros, which are more powerful than templates.


With Lisp you also need to have the macro definition ready when compiling code that depends on it. This is essentially the problem that all the various "system definition facilities" try to solve in at least somewhat usable manner.


Also, the package definition.

You need the package definition for the source-code-read time of whatever you're compiling, and the macros for its macro-expansion time. Practically speaking, these are tied together.


It is not equally powerful, but Java's approach is erasure. This means that at compile time the language can differentiate List<String> and List<Buffer>, and will type check them correctly, but at runtime the generic type gets "erased" and all the code turns into List<Object> (where in Java all types derive from Object).

As a result, there do not need to be multiple specializations of the List type for strings and buffers. No matter how many different parameterizations there are of a given generic class, there is only one implementation. I think this is probably a good thing most of the time, since on average the performance gains of this form of specialization are insignificant compared to the costs of code size. You wouldn't want 100 or more different implementations of List clogging memory and needing to be JIT-compiled.

One of Java's constraints is also that all generic types are reference types. So one reason why it wouldn't be too useful to instantiate separate List<String> and List<Buffer> is because these are basically just List<ReferenceToObject> anyway. C++ templates can provide both value and reference semantics, and can work on primitive types such as `int` which may reside on the stack, whereas Java generics only operate on full-fledged garbage collected objects that are dynamically allocated on the heap.

The disadvantage of this approach is that it precludes some constructs that are possible in other languages. For example, in C++ a template method could take some type `T` and construct a new instance of it. In C++ you could do this, but in Java it's not legal:

    <T> T broken() { return new T(); }
In C++ you could also create a new local variable of type `T` that will be allocated on the stack. This `T` can represent anything from a primitive `int` to a complex struct or class, or a pointer or reference to a dynamically allocated object. The C++ code could then dynamically allocate a new instance of `T`.
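For contrast, the C++ equivalent of that broken Java method is perfectly legal, because each instantiation knows its concrete type (make_default is a made-up name):

```cpp
#include <string>

// Legal in C++: the template is stamped out per concrete type,
// so the compiler knows exactly which constructor to call.
template <typename T>
T make_default() { return T(); }

// make_default<int>() value-initializes an int on the stack;
// make_default<std::string>() default-constructs an empty string.
```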

This makes it more difficult to implement generic programming patterns analogous to things like std::allocator in C++, and to deliberately specify different implementations of parameterized types and algorithms. Fortunately these downsides are not too significant in practice in Java.


I'd rather have code in headers and longer compiler times than type erasure.


C# is capable of instantiating its generics at runtime/dynamic link time, so there is no need to compromise.

Instantiation at static link time would be feasible in C++, but runtime instantiation would require a heavy runtime, which is frowned upon in C++.


You could put metadata in the object file which then allows the templates to be expanded at link time.


> as powerful and performant?

as the C preprocessor? Are you kidding?


All modern compilers support precompiled headers. If it's a library external to your project, you just compile it once into a .pch file, and when you change your own code, the .pch is just loaded without having to compile the headers again.


I tend to use a single compilation unit, and use plain C. For a 50k line project I have build times of 1-1.2sec. On top of that I'm also linking to opengl, windows.h and a couple of headers in the standard lib.
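A minimal sketch of that single-compilation-unit setup (file names hypothetical); normally each section below would live in its own .c file:

```cpp
// build.c would contain only #includes of the other .c files:
//     #include "renderer.c"
//     #include "game.c"
// and you compile just build.c, so the compiler launches once and
// parses every header once.

// --- what "renderer.c" might contain ---
static int renderer_init(void) { return 1; }

// --- what "game.c" might contain ---
static int game_run(void) { return renderer_init() ? 0 : 1; }
```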


The speedup is easy to explain. Instead of launching gcc 50 times, it's now launched just once.


Exactly :) I come from a .NET background, where compile times are kinda bad compared to what I get in C. I had also heard of big C++ projects that could take more than 30 mins to build. So I was always scared to move to native code. Until I learned the single compilation unit trick! And if compile times were to get slow again, there are many more ways to bring them down even further.


Since when are .NET compile times worse than C? Csc compiles extremely fast; most long compile times are caused by msbuild doing... not much at all, really.


I work on a much bigger project and we use that method (unity builds) - it helps for full builds and on small projects, but it makes incremental changes very slow on bigger projects.


If the code is included in the headers, then yes; if only declarations are included, then no.


truetype font rendering: stb_truetype.h: LOC=3287

Wow. I am speechless. Going to dive in another day.


Doesn't Boost have a number of header-only libs? I believe there is a header-only regex implementation for example.


C++ is a little different in that templates have to go in header files. A lot of template-based C++ libraries are entirely templates in header files.

It's weird because it does feel like it violates the separation of interface and implementation. On the other hand, it's really efficient. It has the advantages of doing generics with the C preprocessor (efficiency and de-duplication of code, if nothing else), minus many of the disadvantages of doing generics with the C preprocessor.

Putting non-template code or data in headers is something I've seen people do for expediency, but it's more trouble than it's worth IMO. As soon as two files in the same project include it, you risk a future headache.


The compiled result of template-heavy C++ can be very efficient, but the compilation process is notoriously inefficient. Just to be clear :)


Technically you can put the template definition in a separate translation unit and rely on explicit instantiation. The linker will tell you exactly what you need to instantiate. It is tedious and seldom done.

/Pedantic
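A sketch of that explicit-instantiation technique (twice is a made-up example; the pieces are shown inline here but would normally be split across files):

```cpp
// --- twice.h: declaration only, so the header carries no code ---
template <typename T> T twice(T x);

// --- twice.cpp: the definition, plus explicit instantiations ---
template <typename T> T twice(T x) { return x + x; }
template int twice<int>(int);          // emit code for T = int
template double twice<double>(double); // emit code for T = double

// Other translation units that include twice.h can now call
// twice(21) or twice(1.5); a call with an uninstantiated type fails
// at link time, which is how the linker "tells you what to
// instantiate".
```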


It is worth noting that header files should not include code, just macro and variable definitions.


There is nothing wrong with having code in headers.


If you put a bare function or variable in a header, then use it in multiple compilation units, the linker will reject your multiply-defined symbols.

(Templates get de-duplicated).


Unless you mark them 'static', which is common practice for constants and small inlined functions.


Inline is the keyword here (literally). Inline, in C++, is defined as disabling the requirement of having exactly one definition of a function (it is also a weak hint for the compiler to perform inlining).

Template functions are implicitly inline.

Inline in C has a similar but subtly different definition.
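The three cases can be sketched side by side (function names hypothetical); imagine this snippet living in a header included by two .cpp files:

```cpp
int collides(void) { return 1; }      // plain definition: if two .cpp
                                      // files include this header, the
                                      // linker rejects the duplicate

static int per_tu(void) { return 2; } // each translation unit gets its
                                      // own private copy; common for
                                      // constants and small helpers

inline int shared(void) { return 3; } // one-definition rule relaxed:
                                      // the identical copies are
                                      // merged into one
```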


It's called a header for a reason, unless we are now redefining the meaning and intent of headers, so that people could stick C++ template code in them?


Oh, where is the officially sanctioned definition of the meaning and intent of a header?

What we call them doesn't matter. There is nothing wrong with putting code in one. You can call them source files if you prefer.


> Oh, where is the officially sanctioned definition of the meaning and intent of a header?

So glad you asked: The C Programming language, 2nd edition, page 82, chapter "FUNCTIONS AND PROGRAM STRUCTURE", as follows:

"There is one more thing to worry about--the definitions and declarations shared among the files. As much as possible, we want to centralize this, so that there is only one copy to get right and keep right as the program evolves. Accordingly, we will place this common material in a header file, calc.h, which will be included as necessary."

https://youtu.be/4PaWFYm0kEw?t=2237


Right, definitions and declarations. Common material. Code.


Oh, this kind of horror I've met lately has got a name? Well, I learned something today. Thanks ;-)


Despite what the code may look like, they all have a very easy to use API. Take stb_image.h for example. Most people need just two functions: stbi_load(), which loads any supported image format into a byte array, and stbi_image_free(), which frees the data.

But if you need anything more than that it's all there. e.g. loading from memory, loading via callbacks, support for HDR images, support for custom allocators, preprocessor flags let you exclude code for unused image formats, etc etc


You can copy the functions into a .c file if it makes you feel better.


Firefox has more than a 4% market share[0][1]. One of the main reasons I've heard is that it's free, and that the Mozilla Foundation is a non-profit.

I also just found this bit, that explains how some browsers might be over or under estimated, because of how their internals work[2]. So we really don't know what the market shares really are.

[0]http://gs.statcounter.com/#browser-ww-yearly-2008-2016

[1]https://www.w3counter.com/globalstats.php

[2]https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Ac...


Oh. I guess the source I used was garbage. I just typed in "Firefox browser share" on Google, looked at the first result, and it looked legit.

It seems their share is shrinking quite fast though.


Using Google to check the market share of a product rival to Google's Chrome. It's interesting how important Google Search is and how powerful it could be. (BTW I certainly don't imply Google's algorithms are skewed)


My biggest gripe with OOP is the Oriented part. If you design your entire codebase around OOP you will run into architectural problems. Especially with so-called Cross Cutting Concerns[0]. The way I tend to write code, is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out. I have heard this approach being called "Compression Oriented Programming", but I don't care much for what people want to call it.

This approach doesn't mean no objects ever. But only when your problem actually calls for it. Likewise you will also end up with parts that are purely functional, data-oriented, etc. But they will be used where they make sense.

On top of that I'm also using pure C99. It does away with a lot of the fluff and cruft in other languages. In the past I used to try to fit my problems into whatever the most fancy language features I was offered. Which cost me a lot of time analysing. Now I just solve my problem.

Mind you, C is not a perfect language. There are features I wish it had. But for my approach to programming it is the most sensible to use. Apart from maybe a limited subset of C++ (such as function overloading, and operator overloading for some math).

[0] https://en.wikipedia.org/wiki/Cross-cutting_concern


> The way I tend to write code, is to just start with my main function and write whatever procedural code I need to solve my problem. If I start seeing patterns, in my data or algorithms, that's when I start pulling things out.

That is the same technique that Stepanov describes in 'From Mathematics to Generic Programming'.


Disabling the box shadow on .content-root fixes the performance bug (FF46.0).


Thanks for finding that out. We (Firefox) fixed a box shadow performance bug [1] in Firefox 47, which will be released on June 7th. It looks like this site is hitting that perf bug.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1250947


We just discovered the same by disabling the media query which handles the shadow. Thank you very much for investigating.


Seems to work fine on FF (46) now, thanks.


This seems pretty draconic. To compare with a recent law passed here, in Belgium:

- A fee of 4.000 Euro to bring a product to market.

- Juice bottles of maximum 10ml

- No remote selling of e-cigarettes (but liquids are for some reason allowed)

- No advertising (with some exceptions)


(Just a small note: "draconic" means "dragon-like". You mean "draconian".)


Thanks. It's "draconisch" in Dutch. And words with -isch are typically translated to -ic in English, hence the confusion.


Wtf is with the max of 10mL? That's asinine. God forbid anyone get a bulk discount..?


The most surprising detail to me was how much water they have to drain out of the metro each day:

"As a result, he said, they “discharge approximately 2-million gallons of water a day.” In other words: about three Olympic sized swimming pools worth."



Didn't DreamSpark use to have the Pro edition? Now for both 2013 and 2015 it's the Community edition.


The Community edition is basically Pro, with different licensing. From what I understand, that's the version that would make the most sense for the target audience of DreamSpark (college students) anyway.


Yes. In fact I downloaded the pro version about two weeks ago, so it seems they only removed it relatively recently. I hope they add pro 2015.


No idea about Pro, but I downloaded 2013 Ultimate from Dreamspark.


I think you might be able to find more details on Blaauw's website[0]. I've also found a page with their publications[1].

[0]http://blaauw.eecs.umich.edu/project.php

[1]http://blaauw.eecs.umich.edu/resource.php?grp=1


From that list, a 2013 overview of the platform (pdf): http://blaauw.eecs.umich.edu/getFile.php?id=498

AnnArbor SmartZone funding, http://www.a2gov.org/departments/finance-admin-services/smar...

U of M test track for smart/autonomous cars, http://www.mlive.com/news/ann-arbor/index.ssf/2014/03/univer...

