I have been warned that LINQ is often inefficient compared to raw C# with the same functionality -- but I have my doubts, as the MS C#/.NET team put quite a lot of work into LINQ efficiency under the hood. Did you test this against any raytracing setups that don't use LINQ?
LINQ isn't inefficient as such -- that's not how I would put it. When used correctly, the performance is pretty good, and it buys you elegance and a massive amount of expressive power with relatively little performance tradeoff. I use LINQ everywhere in my code and the maintainability gains are palpable.
It does have overhead, but it shouldn't be avoided for that reason unless you're really trying to squeeze performance gains out of your code. Eric Lippert, a principal developer on the C# compiler team, has this to say [1].
Though my understanding is that if you do something like this:
var a = new double[10000];
var b = a.Select(x => x * x).ToArray();
I'd expect LINQ to create a List or a similar structure while executing the Select, then copy the output to the final array. If you were operating on the two arrays directly, I would expect fewer memory allocations (unless these things get optimized away).
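To make the comparison concrete, here's a small sketch of the two approaches (a minimal, self-contained example; the variable names are just illustrative):

```csharp
using System;
using System.Linq;

class SquareDemo
{
    static void Main()
    {
        var a = new double[10000];
        for (int i = 0; i < a.Length; i++) a[i] = i;

        // LINQ version: Select returns a lazy IEnumerable<double>,
        // so ToArray cannot know the final length up front.
        var viaLinq = a.Select(x => x * x).ToArray();

        // Direct version: the output length is known, so a single
        // exactly-sized array is allocated and filled in place.
        var direct = new double[a.Length];
        for (int i = 0; i < a.Length; i++)
            direct[i] = a[i] * a[i];

        Console.WriteLine(viaLinq.SequenceEqual(direct)); // True
    }
}
```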
While your comment is true, I'm not sure how it's relevant. `ToArray` is still operating on the `IEnumerable` produced by Select, which doesn't tell you that the Select is itself operating on an array.
The CLR does do some type checking to try to propagate the size if it can, but that's by no means guaranteed.
Edit: In skimming the source available on sourceof.net, it looks to me like the array doesn't actually get propagated far enough, so the construct in GP will in fact incur several unnecessary array copies.
It's `ToArray` itself that's the problem. Since it doesn't know how big the result will be up front, it has to allocate a small array, start iterating, copy everything into a bigger array each time it runs out of space, and finally allocate an array of exactly the right size and copy everything into that.
You can see the `ToArray` implementation at [0], which defers the implementation to [1]. The implementation checks for ICollection to get the exact size, but the type [2] that `Select` returns doesn't implement ICollection, so `ToArray` has to fall back to the less efficient algorithm.
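A simplified sketch of that fallback path -- not the actual BCL implementation, just the shape of the algorithm described above (the starting capacity of 4 is an arbitrary choice here):

```csharp
using System;
using System.Collections.Generic;

static class EnumerableSketch
{
    // Sketch of what ToArray has to do when the source length is
    // unknown: grow a buffer by doubling, then copy once more into
    // an exactly-sized result array.
    public static T[] ToArraySketch<T>(IEnumerable<T> source)
    {
        var buffer = new T[4];
        int count = 0;
        foreach (var item in source)
        {
            // Out of space: allocate a bigger array and copy everything over.
            if (count == buffer.Length)
                Array.Resize(ref buffer, buffer.Length * 2);
            buffer[count++] = item;
        }
        // Final copy into an array of exactly the right size.
        Array.Resize(ref buffer, count);
        return buffer;
    }
}
```

When the source implements ICollection, the real implementation can skip all of this and allocate the exact size once, which is the fast path the parent comment mentions.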
Oddly enough, it seems that the certificate just happened to expire today, in between the posting of my first and second comments. I didn't mention it only because I'm at work and couldn't be 100% certain that it wasn't my organization's security service messing with things.
Of course I'd never use LINQ inside a tight loop in the rendering path of my Unity3D projects, that's not what it is built for, but in my setup and project management code it is a god-send for expressiveness and maintainability.
LINQ doesn't optimize that much. If you're using LINQ on a database, you may benefit from the underlying database and driver's optimizations. However, if you're just operating on lists and in-memory data structures, you are probably eating the overhead.
As an example, a .Where(...) on a List<...> object is exactly what it claims to be: a linear pass through the list. Sometimes that's exactly what you need. Other times, it's more elegant and fast enough to represent an operation as a list comprehension. Finally, sometimes you really do need to eke out more performance. It's only the last case where LINQ is a bad idea.
> As an example, a .Where(...) on a List<...> object is exactly what it claims to be: a linear pass through the list.
Yes, but it uses deferred execution: the call returns enough information to perform the filter, but doesn't actually do anything until you start to enumerate the result. If, for instance, you then do a Take(5) on the result, the filter is only applied until it finds 5 elements that satisfy it (or until there are no more elements).
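That short-circuiting is easy to observe by counting predicate calls -- a minimal sketch, with the counter added purely for demonstration:

```csharp
using System;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        int predicateCalls = 0;
        var numbers = Enumerable.Range(1, 1000);

        // Building the query executes nothing yet.
        var query = numbers.Where(n => { predicateCalls++; return n % 2 == 0; })
                           .Take(5);

        // Enumeration stops as soon as five matches are found, so the
        // predicate only sees 1..10, not all 1000 elements.
        var result = query.ToList();

        Console.WriteLine(string.Join(",", result)); // 2,4,6,8,10
        Console.WriteLine(predicateCalls);           // 10
    }
}
```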
The thing about Linq efficiency is that under the hood it’s setting up a state machine with enumerators and continuations to be fully general. It’s about as fully-optimized as it could possibly be given the design constraints, but it will always be less efficient than just iterating over a collection directly.
But the worst performance hits in LINQ come when people don't know how to use it. The biggest culprit is unnecessary materialization, e.g. ToList(), or anything that requires traversing the entire enumerator, like All(). If you're careful to avoid the obvious pitfalls, you should be fine.
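The materialization pitfall looks something like this -- a contrived sketch, but the pattern of sprinkling ToList() between stages shows up in real code:

```csharp
using System;
using System.Linq;

class MaterializationDemo
{
    static void Main()
    {
        var data = Enumerable.Range(1, 1_000_000);

        // Pitfall: each ToList() allocates and fills an entire list
        // before the next stage even starts.
        var eager = data.Where(n => n % 3 == 0).ToList()
                        .Select(n => n * 2).ToList()
                        .First();

        // Streaming version: no intermediate lists, and First()
        // stops the whole pipeline at the first match.
        var lazy = data.Where(n => n % 3 == 0)
                       .Select(n => n * 2)
                       .First();

        Console.WriteLine(eager == lazy); // True -- same answer, very different cost
    }
}
```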
That's... not really true of LINQ, specifically. I have seen this assumption from developers before, with regard to LINQ-to-objects, that 'the compiler does clever stuff to optimize LINQ expressions', but... no, not really. It compiles a series of method calls. The JIT might do some optimization. But it's not a query optimizer - it just does what you tell it, and underneath it all there's nothing but an enumerator.MoveNext() getting called repeatedly.
Setting up state machines and continuations is something C# does when you write generators (with yield) and async methods (with await), but LINQ doesn't do that, particularly.
LINQ the language feature in particular doesn't do any of that sort of thing. As a language feature, LINQ is just the 'query comprehension' syntax, which is the 'from x in y where bar select z' coding form. And from the language's point of view that is just a different way of writing y.Where(x => bar).Select(x => z), which it will then try to compile. If it happens to wind up calling generator methods which implement state machines and continuations, or just simple methods that return ordinary objects, LINQ doesn't care, so long as the types all line up.
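The equivalence is purely syntactic, which a tiny example makes obvious (array contents chosen arbitrarily for illustration):

```csharp
using System;
using System.Linq;

class QuerySyntaxDemo
{
    static void Main()
    {
        var y = new[] { 1, 2, 3, 4, 5 };

        // Query-comprehension syntax...
        var q1 = from x in y where x > 2 select x * 10;

        // ...is compiled to exactly these method calls:
        var q2 = y.Where(x => x > 2).Select(x => x * 10);

        Console.WriteLine(string.Join(",", q1)); // 30,40,50
        Console.WriteLine(string.Join(",", q2)); // 30,40,50
    }
}
```

The compiler never asks what Where and Select actually do; any type with methods of those names and compatible signatures will satisfy it.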
The only other thing that confuses matters with LINQ is that the lambdas it compiles can match method signatures taking either delegate types or Expression<> types. In the latter case, the lambda doesn't get compiled as executable code, but rather as an expression-tree literal, which is how LINQ to SQL and friends were built (but again, that's not a 'LINQ' feature -- you can assign or pass a lambda as an Expression<> yourself without involving 'LINQ'). And THAT is where LINQ gets a lot of its worst reputation, because it turns out to introduce a ton of complexity and failure modes that aren't the fault of the C# language, except in as much as it made it possible for a library to get involved in interpreting your code's syntax tree at runtime, which may have been a mistake.
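The delegate/expression split is visible with the same lambda text assigned to two different types -- a minimal sketch of the distinction:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Compiled to executable IL: an ordinary delegate you can call.
        Func<int, int> asCode = x => x * x;

        // The same lambda text, but compiled to an expression-tree
        // literal that a library (e.g. a SQL provider) can inspect
        // and translate at runtime.
        Expression<Func<int, int>> asTree = x => x * x;

        Console.WriteLine(asCode(5));           // 25
        Console.WriteLine(asTree.Body);         // (x * x)
        Console.WriteLine(asTree.Compile()(5)); // 25
    }
}
```

Which form you get is decided entirely by the target type at the call site, which is why a query that works fine in LINQ-to-objects can fail at runtime when a query provider can't translate the same expression tree.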
The Go debugging integration with Visual Studio Code is built on top of the Delve debugger, but exposed through the integrated debugging UX of VS Code. We're working with the Delve developers on a few things to help make this experience great - including Windows support (https://github.com/derekparker/delve/pull/276).
Mostly unrelated - but this reminds me of a project I did years ago to put an entire raytracer in a single C# LINQ expression.
https://github.com/lukehoban/LINQ-raytracer/blob/master/READ...