Hacker News | znkr's comments

One use case where I never want to miss it is in tests: Understanding what the differences between the expectation and the actual result are is invaluable.


A minimal diff is one where the number of edits is minimal. The context around edited lines does not count as edits, because those are matching lines. That said, minimality is only a proxy for readability, which is why it's a good property to relax.
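As a sketch of what "minimal number of edits" means: every line outside the longest common subsequence of the two inputs must be an insertion or a deletion, so the minimal edit count falls out of an LCS computation. The names below are illustrative, not from any particular diff library.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Length of the longest common subsequence of two line sequences
// (classic dynamic program, quadratic in the input sizes).
static std::size_t lcs_length(const std::vector<std::string>& a,
                              const std::vector<std::string>& b) {
    std::vector<std::vector<std::size_t>> dp(
        a.size() + 1, std::vector<std::size_t>(b.size() + 1, 0));
    for (std::size_t i = 1; i <= a.size(); ++i)
        for (std::size_t j = 1; j <= b.size(); ++j)
            dp[i][j] = (a[i - 1] == b[j - 1])
                           ? dp[i - 1][j - 1] + 1
                           : std::max(dp[i - 1][j], dp[i][j - 1]);
    return dp[a.size()][b.size()];
}

// Minimal number of edits (inserted + deleted lines) in a diff:
// everything outside the common subsequence must be an edit.
std::size_t minimal_edits(const std::vector<std::string>& a,
                          const std::vector<std::string>& b) {
    return a.size() + b.size() - 2 * lcs_length(a, b);
}
```

Matching context lines contribute to the LCS and thus cost nothing, which is exactly why they don't count as edits.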


> Even if something does align better with real numbers, it's still just x+0i

Beware, it’s not always useful to work in complex numbers; sometimes you want to do something different for reals and for complex numbers. The prime example here is complex analysis. Differentiation is defined via limits, and on the complex plane there are many more directions from which to approach a limit than the two on the real line. This has some interesting implications. For example, any function that is complex-differentiable on an open set is infinitely differentiable there.
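As a sketch of why direction matters: complex differentiability at $z_0$ requires the same limit from every direction,

```latex
f'(z_0) = \lim_{h \to 0} \frac{f(z_0 + h) - f(z_0)}{h}, \qquad h \in \mathbb{C},
```

and demanding agreement between the approach along the real axis and along the imaginary axis already forces the Cauchy–Riemann equations on $f = u + iv$:

```latex
\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad
\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}.
```

A standard example: $f(z) = \bar{z}$ (i.e. $u = x$, $v = -y$) is perfectly smooth as a map of the real plane, yet $\partial u/\partial x = 1 \neq -1 = \partial v/\partial y$, so it is complex-differentiable nowhere.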


> The equal sign should be reserved for comparisons, because that is what it means in mathematics.

This is touching on a pet peeve of mine: Mathematics and programming are similar in many aspects, but this is not one of them. In mathematics = is not a comparison, but a statement.

More generally, mathematics is about tautologies, that is, statements that are always true. In programming, a comparison is an expression that evaluates to either true or false.

That doesn’t mean that there’s no room for conditionals in mathematics (one example is piecewise function definitions). But it’s not the same. Heck, even the definition of “function” is different between mathematics and programming.
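A minimal C++ illustration of the gap: `=` is assignment (a state change), `==` is the comparison, and a statement like `x = x + 1` is perfectly valid code even though it is false as a mathematical equality. The function name here is made up for the example.

```cpp
// Assignment vs. comparison: '=' changes state, '==' evaluates to a bool.
int increment_and_compare() {
    int x = 1;
    x = x + 1;        // valid program statement; false as a mathematical equality
    return (x == 2);  // '==' is the comparison; true converts to 1
}
```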


That's why "procedure" is a much better name. It's not a hill I'm willing to die on, but I think it's correct.


And let's use the keyword "go-to" to run a function, back to the basics basically ಥ ‿ ಥ


ok maybe not that, that was awful.


Seconded. Languages could even use "function" only for pure functions and "procedure" for everything else. Pascal uses "procedure" for things that don't return a value, but I think the pure vs. side effect distinction is more useful.
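A sketch of what that split might look like, rendered in plain C++ since no mainstream language has these exact keywords: `constexpr` approximates "pure function" (result depends only on inputs, no side effects), while a `void` function stands in for a Pascal-style "procedure" that exists only for its effect.

```cpp
#include <cstdio>

// "function": pure — result depends only on its inputs, no side effects.
constexpr int square(int x) { return x * x; }

// "procedure": called for its side effect, returns nothing (Pascal-style).
void print_square(int x) { std::printf("%d\n", square(x)); }
```

The pure-vs-effectful split is arguably more useful to a compiler and a reader than the returns-a-value-vs-not split Pascal draws.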


In languages that have block scope, a procedure is that block: it's a boundary, but it receives no input and returns no output. Functions do accept input and return output. This distinction is visible in C, which has both value-returning functions and void "procedures".

As a new language-design feature, procedures could be assigned to references for reuse, the same way a function is called by name/variable.


I've never heard of blocks in block-scoped languages being called procedures. I feel like the word "block" is well understood and serves the purpose fine. In lots of other languages, things called procedures do take input, like Ruby, which has first-class Procs like you mentioned.


I agree, this is a real benefit of RAII compared to defer. That said, there are disadvantages to making the resource-freeing part invisible. Not having to debug destructors that do network IO (e.g. closing a buffered writer that writes to the other side of the world) or expensive computation (e.g. buffered IO feeding a compressor) is a definite plus. Don’t get me started on error handling…
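A hypothetical C++ sketch of the complaint: a buffered writer that flushes in its destructor does real work (imagine network IO or compression) at a point where the source shows no call at all, and where a failure has nowhere to go. The class and function names are invented for illustration.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical buffered writer: the expensive flush is triggered
// invisibly by the destructor at end of scope.
class BufferedWriter {
public:
    explicit BufferedWriter(std::vector<std::string>* sink) : sink_(sink) {}
    void write(std::string line) { buffer_.push_back(std::move(line)); }
    ~BufferedWriter() {
        // This "IO" runs implicitly when the object goes out of scope.
        // If it failed here, a destructor has no way to report the error.
        for (auto& line : buffer_) sink_->push_back(line);
    }
private:
    std::vector<std::string>* sink_;
    std::vector<std::string> buffer_;
};

std::size_t lines_flushed() {
    std::vector<std::string> sink;
    {
        BufferedWriter w(&sink);
        w.write("hello");
        w.write("world");
    }  // flush happens here, with no visible call site
    return sink.size();
}
```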


> I agree, this is a real benefit of RAII compared to defer. That said, there are disadvantages of making the resource freeing part invisible.

RAII doesn't make resource freeing invisible. It makes it obvious, deterministic, and predictable. The runtime guarantees that, barring an abnormal program termination that isn't handled well, your resource deallocation will take place. This behavior is not optional, nor does it require specifying ad-hoc blocks or sections.
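A minimal sketch of that guarantee, with invented names: release is tied to scope exit, so the destructor runs on every path out of the function, whether it returns normally or throws.

```cpp
#include <stdexcept>

// Minimal RAII sketch: acquisition increments a counter, the destructor
// decrements it, and the destructor runs on every path out of the scope.
struct Resource {
    explicit Resource(int* counter) : counter_(counter) { ++*counter_; }
    ~Resource() { --*counter_; }  // guaranteed, deterministic release
    int* counter_;
};

// Returns how many resources are still "open" after the scope ends.
int leaked_after_scope(bool fail) {
    int open = 0;
    try {
        Resource r(&open);
        if (fail) throw std::runtime_error("boom");
        // ... use r ...
    } catch (...) {
        // ~Resource already ran during stack unwinding
    }
    return open;  // 0 on both paths
}
```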


When I switched from C++ to a bunch of other languages, I missed RAII initially. However, I quickly learned that other languages just do it differently and often even better (ever had to check an error when closing a resource?). Nowadays, I think that RAII is a solution to a C++ problem and other languages have a better solution for resource management (try-with-resources in Java, defer in Go, with in Python).


> However, I quickly learned that other languages just do it differently and often even better (ever had to check an error when closing a resource?).

I don't think that's even conceptually the same. The point of RAII is that resource deallocation is ensured, deterministic, and built into the happy path. Once you start throwing errors and relying on those to manage resources, you're actually dealing with the consequences of not having RAII.


> try-with-resources in Java, defer in Go, with in Python

Or 'goto error8;' in C. Still, RAII is much more convenient, especially for cases where you allocate a lot of interdependent resources at different points in time. It keeps deallocation logic close to allocation logic (unlike, say, defer), makes sure deallocation always happens in the reverse order of allocation, and doesn't force you to create a nested scope each time you allocate a resource.
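A sketch of the reverse-order point, with invented names: locals are destroyed in the reverse order of their construction, which is exactly the teardown order a hand-written C goto-error ladder has to maintain manually.

```cpp
#include <vector>

// Records the order in which Tracked objects are destroyed.
std::vector<int> destruction_order;

struct Tracked {
    explicit Tracked(int id) : id_(id) {}
    ~Tracked() { destruction_order.push_back(id_); }
    int id_;
};

void allocate_three() {
    Tracked a(1);  // e.g. open a file
    Tracked b(2);  // e.g. map a region of that file
    Tracked c(3);  // e.g. a parser over that mapping
}   // destroyed as c, b, a — the reverse of allocation, automatically
```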


It’s mostly fine, until you run into memory leaks due to cycles, or because some part of your program holds onto the root of some large shared-pointer graph and you have no idea which part. If you take it very far, as some codebases I worked with did, you discover that everything needs to be a shared pointer now, because most lifetimes are no longer explicit but implicitly defined by the lifetime of the shared pointer that holds them.
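A sketch of the cycle problem with invented names: two nodes that own each other via `shared_ptr` keep their reference counts above zero after every external owner is gone, so they leak; breaking one edge with `weak_ptr` avoids the cycle.

```cpp
#include <memory>

// A strong edge in each direction would form a reference cycle and leak.
// Making one direction weak breaks the cycle.
struct Node {
    std::shared_ptr<Node> next;  // strong edge: owns the pointee
    std::weak_ptr<Node> prev;    // weak edge: observes without owning
};

long owners_of_a() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;  // a strongly owns b
    b->prev = a;  // b only weakly observes a — no strong cycle
    return a.use_count();  // 1: only the local 'a' owns it
}
```

Had `b->prev` been a `shared_ptr`, `a.use_count()` would be 2, and after both locals go out of scope each node would still be owned by the other, so neither destructor would ever run.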


I used to program a lot in C++ but have switched to a number of different languages since. Everything in C++ is this way, and when you're in the trenches of C++ it's hard to see that things don't have to be.


I distinctly remember what a breath of fresh air it was to switch to Java and then later C# where in both languages an "int" is the only 32-bit signed integer type, instead of hundreds of redefinitions for each library like ZLIB_INT32 or whatever.


Now you can enjoy native-sized ints (nint/nuint) in C#, as the language has been improved for low-level coding tasks.


"Native" types like usize and isize as used in Rust I'm totally fine with.

What I got frustrated with in C/C++ is the insanity of each and every third-party library redefining every type. I understand the history and reasoning behind this, but it's one of those things that ought to have been fixed in the early 2000s, not decades later when it's too late.


I think by C++17 (14 even), it was in a very good state and since then, third party libraries are better and better. So really, a decade and a half later than you think it should have been.


Not a guru, but my take is that the actor model is one method of architecting a system to separate synchronization from other concerns. There are other ways to do that, often specific to a particular problem and with more or less separation. As always, there are many tradeoffs involved.


Go has a couple of SIMD uses in its runtime; memory copies are one of them.


