Hacker Newsnew | past | comments | ask | show | jobs | submit | history | i2om3r's commentsregister

> The same [1] claims in "Visual Summary" that the "cycles/byte" is 1 for various PRNGs but http://xoshiro.di.unimi.it/ seems to show that the reason splitmix64 is not preferred everywhere is that xoroshiro128+ is roughly two times faster than splitmix64 .

I have tested xoroshiro128+ vs splitmix64 in several procedural generation & simulation code bases in C and Swift. I could never confirm the numbers on http://xoshiro.di.unimi.it/. In fact, splitmix64 was slightly faster in all my tests with different optimizations enabled. I always assumed that's because its state only occupies a single register which certainly matters in practical applications (especially in C with its restricted calling conventions). I am not absolutely sure whether that was always the reason, though.


You seem to be talking about a different kind of leakiness. In my mind, there are two kinds: conceptual and performance leakiness. You are talking about the latter. Pretty much any non-trivial system on modern hardware leaks performance details. From what I understand, git's UI tries to present a different model than the actual implementation, but it still leaks a lot of details of the implementation model.


Which book(s) would you recommend?


I always wondered why many non-Latin (mostly cursive) scripts have little variation across different typefaces. Maybe I wasn't looking hard enough? Well, the article mentions a similar observation by a Sri Lankan typographer, so I guess I am not alone. Can someone maybe point me to other non-Latin typefaces that have "their own typographic style"? I found the Baloo samples (last one in the article) refreshing. The style of the Tamil and Devanagari samples is very close to the Latin sample. For the Mina samples (first figure in the article), I can see that they try to capture the character of the Exo Latin typeface, with certain strokes getting narrower towards the end and its superelliptic curves (are there typographic terms for these?). I am not used to reading Bengali, but the style of the sample looks like it is in a font that has a different weight.


The Arabic script has significantly more variation than the Latin script and always has. Given that depicting living beings is typically not allowed in Islam, calligraphy became the artistic expression of choice.


> Can someone maybe point me to other non-Latin typefaces that have "their own typographic style"?

CJK scripts have traditionally used Ming, Song, and Gothic (sans-serif) typefaces.


I guess what I mean is styles besides gothic (undecorated strokes of even thickness) and those that mimic traditional calligraphy. I see a lot of other forms for Latin scripts, but for Arabic, CJK and Indian scripts most typefaces fall into the above two categories. I might be wrong though, maybe I am just not exposed to a lot of variety. I find it most notable in mixed-script printed text, e.g., Arabic text, where the predominant form seems to be Naskh, which looks like calligraphy, set next to Latin, which typically doesn't look like calligraphy at all. This mix creates an image that looks very uneven to me, similar to when people use too many different typefaces in Latin text. Actually, I am not even sure whether the typographic style is dictated by Naskh, or whether it's just the form of writing.


Tamil and Devanagari scripts are close to Latin? In what way? Please, can you elaborate?


I didn't phrase it well. I was specifically talking about the Baloo sample and how the typographic style is the same across scripts. Sorry, I am a layman, so I don't know the proper terms, but I mean the similar stroke weight and curvature, stroke endings being pointy on one side and round on the other, etc.


In addition to the higher unit count there are a few bullet points in the article that indicate where saved/additional time is spent. From the sound of it, most importantly: "The DE pather was modified to have a much higher max iteration count than Age1's, so longer and more complex routes can be found."


For reference, here is the video: https://youtu.be/MtzCLd93SyU?t=19m28s


THANK YOU!!!!! I've looked for that video several times and have been kicking myself for not bookmarking it.


You can explain things with more than a single picture. You can also play with pictures in your mind, and you can encourage students to do so by giving them more than one picture. Visualizing corner cases is often very useful. For triangles, you can change angles and side lengths and see which facts still hold and which don't and how certain values/function results develop.

Moreover, it very much depends on who your target group is. My brain, for example, works entirely inductively (in the beginning). I won't be able to develop an intuition of something if I don't start with examples. Pictures are often good examples. During my undergrad studies, my linear algebra prof was as critical about pictures as you and other commenters here. I hated it. I was never able to get an intuition about the more abstract topics until I saw concrete examples, including pictures, in later lectures and projects. Moreover, not everyone is going to be a theoretical mathematician or quantum physicist. I suspect that by not showing pictures, you usually lose more students along the way than you would ruin among those who need a fully abstract understanding later. It would be interesting to see some data on this, but I guess it's going to be difficult to collect.


I can't say anything about JSON serialization performance as I don't have experience with it. Your other points, though, seem a little handwavy to me. Or maybe I am reading them wrong.

> A tiny, somewhat extreme example: Swift allocates local variables on the heap. (...)

Have you ever encountered a local variable that spuriously didn't get stack-promoted? I haven't. As I said elsewhere, I regularly read the generated code for my hot loops. Also, when profiling with Instruments, I have never been surprised by a heap-allocated local variable that didn't escape. I also don't see why stack promotion would theoretically be a less precise analysis than doing it the other way around. I imagine that if the optimizer fails to promote a local variable, it would be a bug in the same way it would be a bug if an escaping local variable spuriously didn't get boxed (for compilers working the other way around). Just that it won't fail at runtime, which might increase the potential for undiscovered bugs. But again, have you ever been bitten by this?

> And of course generics are implemented via witness tables, so indirect dispatch at roughly similar costs to objc_msgSend()

Generic types are opportunistically specialized and in my experience, the optimizer has gotten a bit better in that regard. I find that a nice compromise between C++ and, say, Java. You can also influence the optimizer's decision with various not-yet-stable annotations (@specialized, for example). Sure, if you want to write reliably fast generic code in Swift, you need to know a few things. None of the above is possible in Objective-C, though, because of its type system.

> I just saw something about blocks always causing heap allocations (and this is corroborated by an attempt someone made to port some HTTP parsing code from C to Swift. Even with max. optimizations and inline craziness, it was ~3x slower).

If by blocks you mean closures, then yes, they are heap-allocated if they escape. For non-escaping closures, there is always a way to force inlining unless you pass them to compiled third-party code. Cross-module optimization is an area that is still being worked on. Without knowing anything about the code in the benchmark, from your description it sounds to me like there is untapped potential for optimization, either by making the code more idiomatic and/or by using one or two annotations. Which brings me to my last point.

> What benchmarks are you looking at?? While it wouldn't be true that I've never seen a Swift advantage, it's pretty close to never.

Do you have links? Not that I looked too thoroughly, but I have never encountered a benchmark comparing Swift with Objective-C (or other languages?) that both (1) showed significantly worse performance for Swift across the board and (2) that I trust. Most recent code I have seen that does not perform well could fairly easily be improved or would have to be rewritten in more idiomatic Swift. I specifically say most, since there certainly is still room for improvement, but in my experience it is nowhere near as bad as your comment suggests.


> Generic types are opportunistically specialized and in my experience, the optimizer has gotten a bit better in that regard

That's always the answer: "the compiler has gotten better and will get better still". Your claim was that Objective-C has all this "extra work" and indirection, but Swift actually has more places where this applies, and pretends it does not. With Objective-C, what you see is what you get, the performance model is transparent and hackable. With Swift, the performance model is almost completely opaque and not really hackable.

>None of the above is possible in Objective-C, though, because of its type system.

What does the "type system" have to do with any of this? It is trivial to create, for example, extremely fast collections of primitive types with value semantics and without all this machinery. A little extra effort, but better and predictable performance. If you want it more generic, even NeXTSTEP 2.x had NXStorage, which allowed you to create contiguous collections of arbitrary structs.

Oh... people seem to forget that Objective-C has structs. And unlike Swift structs, they are predictable. Oh, and if you really want to get fancy, you can implement poor-man's generics by creating a header with a "type variable" and including that in your .m file with the "type variable" #defined. Not sure I recommend it, but it is possible.

The fact that Foundation removed these helpful classes like NXStorage and wanted to pretend Objective-C is a pure OOPL is a faulty decision by the library creators, not a limitation of Objective-C. And that Foundation was gutted by CoreFoundation, making everything even slower still, was also a purely political project.

In general, you seem to be using "Objective-C" in this pure OOPL sense of "Objective-C without the C" (which is kind of weird because that is what Swift is supposed to be, according to the propaganda). Objective-C is a hybrid language consisting of C and a messaging layer on top. You write your components in C and connect them up using dynamic messaging. And even that layer is fairly trivial to optimize with IMP-caching, object-caching and retain/release elision.

Chapter 9 goes into a lot of detail on Swift: https://www.amazon.com/gp/product/0321842847/

A few Swift issues surprised me, to be honest. For example native Swift dictionaries with primitive types (should be a slam dunk with value types and generics) are significantly slower than NSDictionary from Objective-C, which isn't exactly a high performance dictionary implementation. About 1.8x with optimizations, 3.5x without.

This is another point. The gap between Swift and Objective-C widens a lot with unoptimized code. Sometimes comically so, 10x isn't unusual and I've seen 100x and 1000x. This of course means that optimized Swift code is a dance on the volcano. Since optimizations aren't guaranteed and there are no diagnostics, your code can turn into a lead balloon at any time.

And of course debug builds in Xcode are compiled with optimization off. That means for some code either (a) the unoptimized build will be unusable or (b) all those optimizations actually don't matter. See "The Death of Optimizing Compilers" by D.J. Bernstein.

Anyway, you asked for some links (without providing any yourself):

https://github.com/helje5/http-c-vs-swift

https://github.com/bignerdranch/Freddy/wiki/JSONParser

"Several seconds to parse 1.5MB JSON files"

https://github.com/owensd/swift-perf

But really, all you need to do is run some real-world code.

You also mention looking at the assembly output of the Swift compiler to tune your program. This alone should be an indication that either (a) you work on the Swift compiler team or (b) you are having to expend a lot more effort on getting your Swift code to perform than you should. Or both.


Did you compare it to a roughly equivalent Objective-C program? The top four entries in your profile data show retain, release and objc_msgSend. You will have those in Objective-C too. Maybe to a different degree? That's also why I am asking whether you have similar Objective-C code to test. Or maybe it's just that unoptimized Swift code is slower and optimized code is faster?

Compile times are a related but different matter. There are -warn-long-function-bodies and recently -warn-long-expression-type-checking which are really helpful and can give you an idea where most of the compile time is spent. In my experience, the type checker can spend a lot of time in mildly complex expressions involving overload resolution, which can be really annoying but there are often ways around it. With those culprits being eliminated, I have never encountered 5 minute build times for projects of this size or bigger, and I like to imagine that I write fairly generic code.
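For reference, both are frontend flags, so they need the -Xfrontend prefix (in Xcode, under "Other Swift Flags"); the thresholds are in milliseconds, and the values below are arbitrary examples:

```
-Xfrontend -warn-long-function-bodies=100
-Xfrontend -warn-long-expression-type-checking=100
```

With these set, the compiler emits a warning for every function body or expression whose type checking exceeds the threshold, which points you straight at the pathological expressions.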


Thanks, I didn't know about those flags! I suspect there was a pathological expression somewhere, since most of that time was spent in a single file.


I can't speak for other people's code, but I regularly profile and read the machine code generated by the Swift compiler, at least for my hot loops. If you know what you are doing (use the right annotations and optimizer flags), even fairly generic code often compiles down to something that comes very close to what an optimizing C compiler would generate. Sometimes it generates even faster code, because the Swift calling convention can make better use of available registers. Sure, there are situations where it generates less optimal code, but generally, generic idiomatic Swift is on a different level than anything a (non-profiling) compiler for idiomatic Objective-C can ever come close to. That's my experience.
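If you want to try this yourself, the optimized assembly can be dumped directly from the command line (the file name below is a placeholder):

```
swiftc -O -emit-assembly HotLoop.swift -o HotLoop.s
```

Reading the .s output for just the hot function is usually enough to confirm whether specialization and stack promotion kicked in.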


> very close to what an optimizing C compiler would generate.

Hmm...with Objective-C, I get exactly what an optimizing C compiler "would" generate, because that's an optimizing C compiler generating it.

