thedataangel's comments | Hacker News

I write Haskell commercially, and none of the problems you list are problems my team actually has in reality.


Your comment would be much more interesting and constructive if you actually took the time to refute the individual points instead of just saying that it's not a problem for you.


How do you deal with #4?


How is #4 even a problem to begin with?

If anything, it's the inverse: "isolation of side effects" means the architecture is more flexible and lego-like -- as opposed to a spaghetti mess.

And if you need to change interfaces, you are in for one of the smoothest rides, as the compiler will guide you every step of the way to implementing the change wherever it's needed.


> How is #4 even a problem to begin with?

Let's say you are making an RPG where you can equip items. You write a lot of pure functions and behavior for stat adjustments etc. Then the designer comes to you and says that items are no longer constants: every item needs a durability which goes down on use. In Java this would be a few-line change: just add a new field to items and add a line to subtract it in the relevant places. In Haskell you would need to bubble the new item state up to the global state and properly set it in each part of the code where you use items. If you aren't careful you can easily forget to bubble it up somewhere and the item doesn't get updated, or you might have the same item referenced in several places, and now you'd have to refactor to a global item pool, since tracking down all the places that should be updated is not feasible.


> In Java this would be a few line change, just add a new field in items and add a line to subtract it in relevant places.

And then find out that since items used to be constants all instances of a given type of item are actually the same item object, so modifying one unexpectedly affects them all. Or worse, sometimes items are copied and sometimes they're shared by reference, so whether they're affected depends on each item's individual history.

One of the benefits of Haskell is that it forces you to think through these aspects of the API in advance. A mutable object with its own distinct identity and state is a very different concept from an immutable constant and the design should reflect this. Changing one into the other is not an action to be taken lightly.


Not a Haskell expert, but I think I know what you mean - you are passing items by value to many functions and complaining that modifications aren't visible in others when you call them, or that you need to compose the flow so updates are properly "bubbled".

But assuming the code was working before and you had, for example, a function use :: Item -> Item, and you change durability in this function, what else do you need to change to "bubble" the new state? I don't get this.

BTW, what's the problem with a global store and passing only IDs? That seems like a valid approach, and ECS is implemented in a similar manner AFAIK - https://en.m.wikipedia.org/wiki/Entity_component_system


As I understand it, a function use :: Item -> Item in Haskell guarantees that it can't change any property of any item; even more so if the function were use :: Item -> Effect, as could be done on the initial assumption that Items can't change.


In general you can't mutate records. Nothing prevents you from making a new one that's slightly different though. It just won't change the already existing one.

In practice this is not actually a problem. It just takes a little getting used to to get yourself out of the 'memory-address as identity' mindset that procedural languages have.
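A minimal sketch of the durability scenario from upthread (Item and use are hypothetical names): the record-update syntax builds a new value rather than mutating the old one.

```haskell
data Item = Item { name :: String, durability :: Int }
  deriving (Show, Eq)

-- "use" doesn't mutate: it returns a new Item with one field changed.
use :: Item -> Item
use item = item { durability = durability item - 1 }

main :: IO ()
main = do
  let sword = Item "sword" 10
  print (durability (use sword))  -- 9
  print (durability sword)        -- still 10: the original is untouched
```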


Can you provide an example (even in pseudocode, syntax doesn't matter here)?

I don't think that's how you'd handle a state change in idiomatic Haskell. What you propose sounds very error-prone.


> And if you need to change interfaces, you are in for one of the smoothest rides, as the compiler will guide you every step of the way to implementing the change wherever it's needed.

This is the same in any statically typed language. Actually the quality of Haskell/GHC's error messages is limited by constraint-based type inference, and languages that have flow-based type inference do a better job IMO.


> This is the same in any statically typed language

Surely not in any statically typed language. Languages like Java cannot encode the same useful information (or rather, you cannot force them to) as languages like Haskell. Specifically, you cannot make them enforce lack/presence of IO in their types. Most mainstream statically typed languages cannot do that, in fact.


But it's considered good practice in Haskell to keep IO out of the main logic of the program and basically use it as little as possible. Haskell is certainly not an IO-focused language like Go and Rust. The IO monad is almost more of a deterrent than a tool.


That's only partially true. Of course a Haskell program needs to do IO to be useful. The IO Monad is also not a deterrent, where did you get that idea? Haskell is very much IO focused, in fact it's been jokingly called "the world's best imperative language"!

Even then, in Haskell you can say "this doesn't do IO" which you cannot in most languages!

It's also just an example of the expressiveness of the type system, which "most statically typed" cannot enforce or sometimes even express.


My understanding of automatic differentiation (AD) is that it's only really possible at the compiler level, since you need the ability to interpret and manipulate function definitions themselves. Certainly, no library would be able to offer the same level of guarantees telling you if you've done it wrong, nor the same opportunities for optimisation.


It's certainly not the case that autodiff is only possible at the compiler level. I've implemented forward mode (via dual numbers) and reverse mode (via tapes / Wengert lists) autodiff in libraries before.


Notice the qualifier "really". Obviously you can implement autodiff of a sort outside the compiler, since PyTorch and TensorFlow exist, but those implementations constrain you to a select few compositions (please, no comments on Turing completeness with just loops and conditionals). So, for example, if statements in PyTorch are not differentiable (they might have piecewise continuous derivatives) because PyTorch doesn't actually trace the AST. I'm not a languages expert, but outside of implementing it in the compiler I imagine you'd need a homoiconic language to implement it as a library.


If statements aren't really meaningfully differentiable, regardless of how you do it.

Take

    if x == 59:
        return 1000
    else if x > 59:
        return -x
    else:
        return x
How do you optimize x to maximize this, regardless of what language you're in?

It's true that you can get a derivative, but the derivative is essentially meaningless.


  if x <= 0:
      return 0
  else:
      return x
Is in the core of most neural networks today (relu activation), so it is definitely useful.


I don't understand? It's a piecewise differentiable function and you maximize it how you maximize any such function: do gradient ascent where it's differentiable and compare against values at the boundary points (i.e. the start and end of the interval, and points where there's a discontinuity).


I don't disagree that a gradient exists. As the other commenter noted, the gradient/subgradient will usually exist.

What I'm arguing is that this gradient will not allow you to optimize anything of interest for the vast majority of programs.


Yeah sometimes this is called a "subgradient".


There is a derivative, but since the function is discontinuous, the derivative will likewise be messed up (but just around x=59). You really don't want to be climbing a gradient around discontinuous functions!


The function has a derivative by some notions of derivative.

But the function's derivative can't be derived by an application of the chain rule and the known derivatives of primitive functions, which is what algorithmic/automatic differentiation ultimately does (though it does this at run time, not compile time, since ordinary symbolic differentiation explodes in memory for complicated functions).

Also:

The continuous function

    int f(int x) {
        if(x > 2) 
            return x + x;
        else 
            return x*2;
    }
Is not automatically or symbolically differentiable when represented that way.


I believe a source to source differentiator could deal with all these (where well defined of course), e.g.:

https://github.com/FluxML/Zygote.jl

    julia> fs = Dict("sin" => sin, "cos" => cos, "tan" => tan);
    
    julia> gradient(x -> fs[readline()](x), 1)
    sin
    0.5403023058681398


Yeah, automatic differentiation is essentially only usable on functions specified the way mathematicians specify functions: the composition of a series of primitives (plus operators like the integral and the differential itself, as well as inverse relations).

AD is not usable on loops, conditionals or recursive calls.

So basically, whatever way you specify your functions, you are effectively going to have a DSL (within a general purpose language or otherwise), since not all the functions you write are going to be differentiable by AD (and there's some confusion between differentiable in the abstract and differentiable by the methods of AD).

Edit: actually, it's pretty easy to extend AD to functions defined piecewise on intervals so they're in the class of functions amenable to AD. What's hard/impossible is extending it to functions defined by loops or recursion.


AD on loops/recursion works fine when you implement it using dual numbers (see https://news.ycombinator.com/item?id=20893414 for an example). If you used this to implement x^n using a for loop, you would get the correct derivative (n * x ^ (n - 1)).
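A sketch of that claim in Haskell (hypothetical Dual type implementing forward mode): x^n written as an explicit recursive loop still yields n * x^(n-1).

```haskell
-- Forward-mode AD via dual numbers: carry (value, derivative) pairs.
data Dual = Dual { val :: Double, der :: Double } deriving Show

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  fromInteger n         = Dual (fromInteger n) 0          -- constants have derivative 0
  abs    = undefined
  signum = undefined

-- x^n written as an ordinary recursive loop, not a power primitive.
power :: Int -> Dual -> Dual
power 0 _ = Dual 1 0
power n x = x * power (n - 1) x

main :: IO ()
main = print (der (power 5 (Dual 2 1)))  -- d/dx x^5 at x=2: 5 * 2^4 = 80.0
```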


AD is totally possible at the library level (I did it in C# 10 years ago using Conal Elliott’s ideas), however if you want to autodiff more than an expression-only language, compiler support is useful.


Maybe this is the piece I don't understand. What do you mean by "more than an expression-only language"? What is a non-expression in a language? And what would it mean to have a derivative for your non-expression?


You can easily use expressions to create trees rather than values in a library. E.g. a + b need not compute a value; through a bit of operator overloading it can compute the tree plus(tree-a, tree-b).

This “trick” does not extend to statements, however. You can’t override if or semicolon in most languages. You can encode statements as expressions, but then you have to worry about things like variable bindings on your own.


Building expression trees is not the only way to do automatic differentiation. A simpler way is to just carry the value and its derivative as a pair:

    #include <math.h>
    #include <stdio.h>

    struct autodiff { double value, deriv; };

    autodiff just(double x) { return { x, 1 }; }

    autodiff operator +(autodiff a, autodiff b) {
        return { a.value + b.value, a.deriv + b.deriv };
    }

    autodiff operator *(autodiff a, autodiff b) {
        return { a.value * b.value, a.deriv*b.value + a.value*b.deriv };
    }

    autodiff sin(autodiff a) {
        return { sin(a.value), cos(a.value)*a.deriv };
    }

    int main() {
        autodiff x = just(.1);
        for (int ii = 0; ii<4; ii++) {
            x = x*x + sin(x);
        }
        printf("value: %lf, deriv: %lf\n", x.value, x.deriv);
        return 0;
    } 
There is no need to differentiate the for loop or the semicolons. This way is not doing symbolic differentiation. It's implementing the differentiation rules in parallel to calculating the values at run time.

This generalizes to partial derivatives for multivariate functions too:

     template<int dims>
     struct autograd { double value, grad[dims]; };


This only works if you don't have x as a value determining the length of the for loop. If you try to take the derivative of x^x using a loop that multiplies x by itself x times, you'll get a wrong answer (admittedly, expecting a right answer would be foolish).
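A sketch of that failure (Dual and powSelf are hypothetical names): when the loop's trip count comes from the value of x, forward-mode dual numbers differentiate only the multiplications, not the loop bound itself.

```haskell
data Dual = Dual { val :: Double, der :: Double } deriving Show

-- Product rule on (value, derivative) pairs.
mul :: Dual -> Dual -> Dual
mul (Dual a a') (Dual b b') = Dual (a * b) (a' * b + a * b')

-- x^x via a loop that runs (truncate x) times: the duals see only the
-- multiplications, not that the loop bound itself depends on x.
powSelf :: Dual -> Dual
powSelf x = go (truncate (val x) :: Int) (Dual 1 0)
  where go 0 acc = acc
        go n acc = go (n - 1) (acc `mul` x)

main :: IO ()
main = print (der (powSelf (Dual 3 1)))
-- prints 27.0, the derivative of x^3 at 3, but the true derivative of
-- x^x at 3 is 3^3 * (ln 3 + 1), roughly 56.66
```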

But this goes to the question at hand: whether AD should be a library/DSL or whether it should be a primitive of a general purpose language. The thing is, a general purpose language lets you present as functions sorts of things that even dual numbers won't take the derivative of correctly - a dual scheme can't distinguish variable-order loops from constant-order loops.


That's a fair point (and worth documenting as part of the library), but I'd be much more likely to implement pow(x, x) carefully than switch to a different language or compiler.


I appreciate the library approach and I basically agree with you. If anything, I wanted to point out that a normal general-purpose language is so general that adding differentiation into it would be highly costly - just this feature would require that every loop and every conditional return be watched (an even more challenging example than pow(x,x) is a Newton's method implementation of a continuous function using a loop).

An alternative would be creating a general purpose language just for this feature - an interesting though rather specialized project.


I'm not certain, but I suspect you'll hit the halting problem if you tried to be completely general purpose.


This is actually huge. I saw a proof of concept of something like this in Haskell a few years back, but it's amazing to see it (probably) making it into the core of a mainstream language.

This may let them capture a large chunk of the ML market from Python - and hopefully greatly improve ML apis while they're at it.


Huh? Nobody is writing numerically intensive libraries in Python. Clearly this language proposal is taking aim at C++ and Fortran. Even if this caused TensorFlow & others to rewrite everything in Swift, people would write Python bindings to it and keep using Python.

I'll get excited if Apple actually merges this into Swift. It's a niche feature that their compiler team will need to maintain forever. I actually have been working on algorithmic differentiation in C++, so it's not even that I wouldn't want to try Swift out if it actually made it in. However, because this sort of thing is of such narrow interest I believe the future will stay with embedded DSLs / libraries / ugly macro/template hackery.


lol, this is literally by the group that's rewriting TensorFlow in Swift (https://www.tensorflow.org/swift), so you're off on the intention here: it is exactly taking aim at Python as the main data-ecosystem language.


> I saw a proof of concept of something like this in Haskell a few years back, but it's amazing to see it (probably) making it into the core of a mainstream language.

Probably Conal Elliott’s work, eg in Vertigo (http://conal.net/Vertigo/, circa 2005)? There he was using it for normal computations used in pixel shading, pretty cool stuff. He is still active in this field, and has a lot of new papers that are more ML focused. I do wonder if “general” AD support will be useful for computer graphics as well as ML?


It was Conal Elliott's stuff, though it was a more recent paper - Compiling to Categories, if memory serves me.


The key is that there is a distinction between "not statistically significant" and "statistically insignificant".


Not an issue in practice. Most students choose degrees that have real world demand.


Starbucks is next to non-existent in Australia. Partly it's because there were already established local chains when they tried to move into the market, but mostly because you can walk into almost any cafe in the country and get a much better cup of coffee at a comparable price.


Next to non-existent? They have several stores in each of the major cities. Is it the same number of stores as in Seattle? No. But near non-existent is way off.


> you can walk into almost any cafe in the country and get a much better cup of coffee at a comparable price.

This is also true in the United States and yet here we are.


It wasn't when Starbucks started. There used to be a real dearth of quality prepared coffee in America, and Starbucks was often the first place in many towns (not talking NYC or LA or San Fran or Chicago) to get a decent cup of coffee.

I don't know what to take from that though with regard to what y'all are arguing about; I can't really explain the economics of it all. But I think it's a fact that at the point Starbucks was expanding to be national, it was not true that most places it was expanding to had plenty of places where you could get a much better cup of coffee at a comparable price. Starbucks actually brought "coffee culture" to most of the U.S. (if it wasn't them it would have been someone else; this was when "foodie" culture in general was becoming a thing, and people with enough money were becoming more interested in 'gourmet' everything).


Well, I agree with you, but it doesn't contradict my point.

The fact that now, in 2018, it's possible to easily get high-quality coffee in Australia is not a complete explanation of why Starbucks is not popular there, since the same is true of the United States.


Well, the argument would be that Starbucks got its national market share when that was not true in the U.S., but does not have that opportunity now in Australia. shrug.


Java's types become incomprehensible once you start trying to do anything complex with generics.

Haskell _allows_ you to write some true type monstrosities (cf. Lens), but almost all the useful instances of that are wrapped in libraries. Types in app code are typically very readable and expressive.

My main complaint with Haskell's type system vs Java's is actually that Haskell has too few type annotations. The inference is good enough that you usually don't need anything besides the function header, which can make it harder to read code without an IDE if you don't know what types certain functions have.


What is the state-of-the-art for Haskell IDEs? The last time I was doing Haskell in anger, I was a college freshman using GHCI and, I think Geany, which was slightly a nightmare. I would expect that there ought to be some pretty powerful stuff nowadays.


I personally use VSCode with the Haskell IDE Engine as a backend.

It's definitely not as nice as something like Visual Studio or IntelliJ, but it's not too bad.


Standard coding practice is to write type signatures for functions.


I'd probably rephrase "not" to "less". Haskell is sufficiently unlike other languages that a lot of knowledge/skills that are usually transferable between languages don't apply to it. That means you need to (re-)learn how to solve those problems in Haskell if you want to use it.

The end result is _usually_ better (e.g. Lenses are in almost all cases an improvement over getters and setters; Functors, Monads and Traversable are an improvement over imperative control flow), but damn if it doesn't take a while to get your head around those concepts.


yeah it sounds like it. One day I’ll actually dive in and learn that stuff.


Your second and third points are just wrong.

The community is one of the more helpful and responsive ones I've come across. You can generally jump on IRC and find either the people who wrote the stuff that's tripping you up (e.g. Ed Kmett for Lens and MTL), or people who know that stuff backwards and are quite happy to help (e.g. Tony Morris).

There are high quality libraries for _most_ common problems. A lot of them are vast improvements over what you'll find in other languages (e.g. Aeson for JSON serde & manipulation). Stuff like Amazonka often runs ahead of Amazon's official libraries, because it compiles directly from their API spec.

Are you going to run across holes? Sure. My team maintains a bunch of open-source libs for the holes we've found. If you just want to do connect-A-to-B programming, Haskell is not for you - occasionally you're going to have to go and implement something yourself.


Haskell's type system is _worlds_ better than Java's, and it's absolutely essential to the maintenance benefits that Haskell gives you over other languages.


Just curious, but care to give a practical example of how it's worlds better?


Practically speaking, Haskell's type system gives you a lot of code for free.

Stuff like "toString", "equals" and inequalities that you'd usually implement manually in something like Java are done for you automatically by the compiler, with a one-line directive.

That system is extensible as well, so for example you can automatically get Serde code for stuff like JSON, Avro, Protobuf etc with a one-liner.
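For example, a sketch of those one-line deriving directives (Color is a hypothetical type):

```haskell
data Color = Red | Green | Blue
  deriving (Show, Eq, Ord, Enum, Bounded)

main :: IO ()
main = do
  print Green                            -- Show: "toString" for free
  print (Red == Blue)                    -- Eq: "equals" for free
  print (Red < Blue)                     -- Ord: inequalities for free
  print [minBound .. maxBound :: Color]  -- enumeration for free
```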

On a more abstract level, there's a lot of stuff you can express in Haskell that's difficult or impossible on a technical level in Java. For example:

- Functions which are polymorphic in their return type, so that what they do is determined by what type the caller wants them to return.

- Function constraints - for example try to express "A function which takes two polymorphic arguments, which must both be of the same type, and which must be orderable (i.e have <, ==, >, etc defined for them), and returns the same type" in Java. In Haskell, that's just "f :: Ord a => a -> a -> a".

- Higher kinded types. These let you have (loosely speaking) polymorphic containers. For example, instead of List<A> and Set<A>, in Haskell you'd have something like Traversable t => t a, where Traversable is a particular interface you want your "container" to implement.
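The Ord constraint from the second point, written out as a runnable sketch (largest is a hypothetical name):

```haskell
-- Two arguments of the same type, which must be orderable; returns that type.
largest :: Ord a => a -> a -> a
largest a b = if a >= b then a else b

main :: IO ()
main = do
  print (largest (3 :: Int) 5)    -- works for Int...
  print (largest "apple" "pear")  -- ...and any other Ord instance
```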


You need some better examples (your examples all appear expressible in Java):

> - Functions which are polymorphic in their return type, so that what they do is determined by what type the caller wants them to return

You'll have to be more specific here - polymorphism in the return type is clearly trivial w/o specific constraints, e.g. <T> T identity(T x) { return x; }

> - Function constraints - for example try to express "A function which takes two polymorphic arguments, which must both be of the same type, and which must be orderable (i.e have <, ==, >, etc defined for them), and returns the same type" in Java. In Haskell, that's just "f :: Ord a => a -> a -> a".

<T extends Comparable<T>> T f(T a1, T a2)

> - Higher kinded types. These let you have (loosely speaking) polymorphic containers. For example, instead of List<A> and Set<A>, in Haskell you'd have something like Traversable t => t a, where Traversable is a particular interface you want your "container" to implement.

Iterable<T>

If you're arguing that the Java version requires the types to declare that they implement Iterable, Comparable, etc, I think that's more a philosophical distinction about programmer responsibilities rather than a technical type system difference.

Haskell does have a fancier type system than Java, so you can actually present some interesting differences. However, my belief and observation is that for many smart programmers, Java's type system is already "too fancy", i.e., too hard for many people to use effectively. So I'm somewhat skeptical of the extra value brought by Haskell's type system.


I think one distinction is that typeclasses let you add some new 'interface' like Fooable and then implement them for standard types like String and Int as well as allow the user of the library to implement a version as well. With generic programming like Shapeless in Scala one could also recursively derive a Json decoder based on having Decode defined for all the components types of some nested arbitrary data object.

A lot of this stuff is possible in other languages for varying definitions of 'possible' which usually comes down to how much code you have to write to do this, how flexible it is with code you haven't explicitly written (i.e. standard prelude types or what have you) and how much the ecosystems that exist in these languages are oriented in a manner which takes advantage of these features.

Consider nullability in Java and how pervasive it is. Java has tons of great software but the type system is lacking based on how many functions could return null but don't document it, could throw an exception but don't document it, etc... This is predominantly because actually handling nullability on the type level, or error handling on the type level is not straightforward in Java.


> You'll have to be more specific here - polymophism in the return type is clearly trivial w/o specific constraints, e.g. <T> T identity(T x) { return x; }

Consider a function "decode :: Read a => String -> a". What this returns (and what it does) is dependent on the type that the caller expects.
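A runnable sketch using the standard read function (the Prelude's analogue of the decode example): the same call does different work depending on the type the caller annotates.

```haskell
main :: IO ()
main = do
  print (read "42" :: Int)         -- parsed as an Int
  print (read "42" :: Double)      -- same string, parsed as a Double
  print (read "[1,2,3]" :: [Int])  -- or as a list of Ints
```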

> <T extends Comparable<T>> T f(T a1, T a2)

Java may have improved this somewhat since I last used it, but the general complaint from Haskellers on this is how difficult it is to say that types must be equal. Consider the fact that java `.equals` is implemented in terms of `Object`, so there's no requirement that the argument be of the same type (or even of a comparable type) to the originating object. Contrast to Haskell's `==`, which can only be called with the same type on both sides. (There is no concept of referential equality in Haskell, so no equivalent to Java's `==`).

Also, I think you'd struggle with that extends trick once you started getting more complicated constraints. e.g, try something like: "T is traversable, A is orderable and serializable to JSON, and T<A> is a monoid".

> Higher kinded types

This probably gives a better explanation than I can be bothered writing: https://stackoverflow.com/questions/35951818/why-can-the-mon...

The TLDR is that there are some concepts involving higher-kinded types (such as Monads) which are simply inexpressible in Java's type system.


Sum types, not having null in every type, multiple dispatch (useful for numbers) and higher-kinded polymorphism (e.g. X can also be a type parameter in X[T]) are at least four very useful features that Haskell has and Java doesn't. Haskell also has excellent type inference that makes it as succinct as Python despite the static types.

In practice, despite the features Java can support, it's typically too much work to define a new type for safety reasons. Strings are used everywhere, instead of more domain specific types. Even the designers made compareTo return an Int!


So much this. I'm not actually that much of a fan of Haskell (might be due to a bad introduction to it), but dabbling in F#, I'm constantly amazed by how much more robust (and self-describing) you can design your APIs while still being way more concise than when using Java or C#.

Since Haskell's type system is still more powerful than what is achievable with the .NET CLR, I'd assume this effect to be even stronger there.

It's really amazing how far sane language defaults can get you. In C#, I encounter badly designed types all the time. The main reason for this is not that it is impossible to design them right, but that the language makes it so tedious. I'm also really longing for the new features in C# 8, in the hope that they manage to remedy some of that (record types, pattern matching and non-nullable references sound like a good start).


As a C# person: sadly, the pattern matching will not do exhaustiveness checking, which makes adding new data cases much more error-prone than without it, so there is less incentive to move away from inheritance.


I suspect there is a point of diminishing returns with this though; the extra complexity in the type system gets you less and less benefit but at a greater cost to code complexity/comprehension. It could explain why the mixed Scala/F# paradigm has more industry usage than Haskell right now. i.e. I am trading a type system that has a little more expressiveness for a large amount of libraries, ecosystem, rock-solid VM, testing, corporate buy-in, etc. In a .NET shop for example its much easier to just drop an F# DLL into your codebase (in VS create a new F# project in a few clicks); there's no need to change anything else (e.g. build machines, build tooling, testing tools) since it's all IL anyway; same goes with Scala and Java.

My experience in coding functionally has shown that mutation (especially if localized and doesn't leak outside of a function) can really help functional code become a lot faster. From what you describe C# is slowly becoming an F# clone; which may be a good thing or not. Still think that if you need these features you should move to Scala/F#/Ocaml/Haskell though since I suspect these features will be added to C# in a clumsy/compromise way given its roots (e.g Scala and F# already have exhaustive pattern matching, async streams/iterators, better type inference, etc.).


> mutation (especially if localized and doesn't leak outside of a function) can really help functional code become a lot faster

There's this magical thing in Haskell called the "ST Monad", where you can have a pure function (does not have IO in its type signature, does not use unsafePerformIO) that takes a normal value, returns another value, but can use mutation inside the function, and the type system guarantees that the mutation doesn't "leave" the function. So for those cases where, in C++ I would think "this function is pure enough, it doesn't print or order fish food, though it does mutate this temporary array", Haskell's compiler will actually confirm for you "yeah, you're right, it is pure enough".
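A minimal runnable sketch of that (sumTo is a hypothetical name): the signature is pure, yet the body mutates an STRef, and runST guarantees the mutation cannot leak out.

```haskell
import Control.Monad.ST
import Data.STRef

-- Pure type: no IO, no unsafePerformIO. Mutation stays inside runST.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]  -- imperative-style loop
  readSTRef acc

main :: IO ()
main = print (sumTo 100)  -- 5050
```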


> Still think that if you need these features you should move to Scala/F#/Ocaml/Haskell though since I suspect these features will be added to C# in a clumsy/compromise way given its roots

100% agreed, with one caveat: You usually don't get to go "Boss, we're using F# now, 'kay?"

Some workplaces are sadly very constricted in those terms. Maybe I'll see the day when my team gets to use a more functional language, but until then, I'll take what I can get.


Type system can prevent classes of mistakes.

One such example in my experience is modeling (and designing) hardware. Strict checking of the lengths of bit vectors (just bit vectors!) makes mistakes like "incorrect interpretation of a memory address" much harder to make. Add another one: the "ready" bit for some bus values can be described with a Maybe type. Pattern matching on a value of Maybe type makes it impossible to access bus values when the "ready" bit is down (the Nothing case). You get rid of another class of mistakes.

For this to work in a more or less usable way, you have to be able to specify type-level equalities like "concatBitVectors :: (rlen ~ Add alen blen) => BitVector alen -> BitVector blen -> BitVector rlen". This is just not possible in Java. It is possible in C++, but C++ lacks algebraic types like Maybe and more. It is possible in Scala, but Scala has its own peculiarities (modifiable variables are absent in hardware).
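A tiny sketch of the Maybe-typed "ready" bit (latch is a hypothetical name): the pattern match is the only way to get at the data, so there is no code path that reads the bus while "ready" is down.

```haskell
-- Nothing models the "ready" bit being down; Just v is valid bus data.
type BusValue = Maybe Int

latch :: BusValue -> Int -> Int
latch Nothing  prev = prev  -- no valid data: hold the previous value
latch (Just v) _    = v     -- valid data: latch it

main :: IO ()
main = do
  print (latch Nothing 7)   -- 7: can't touch data that isn't there
  print (latch (Just 3) 7)  -- 3
```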


It is not comparable. Haskell's type system is a qualitatively different thing from Java's, with different use cases (it can be used the same way as Java's, but people just don't).

Java's types don't help you organize your code, don't help you make it more concise, and enable only the shallowest kind of code generalization. They are not expressive enough to prove correctness, they mostly do not help you hide information away from the developer's focus, and they mostly do not let you confine code to its responsibility.


I feel like it's hard to give a concrete practical code example, but maybe I can describe my practical experience.

The main difference I've experienced is that while Java's type system forces you to be explicit about your types in a lot of places – like implementing interfaces, assigning types to variable declarations, and in general the fact that "type inference" is limited to expressions – you get very poor checking of those types, since a lot of their complexity is hidden in the object hierarchy. This, in turn, is due to Java's origins as an object-oriented language relying on subtyping and inheritance for polymorphism (generics improved somewhat on this). The result is that a lot of the complexity that Haskell gets accused of ends up in complex design patterns and/or object-oriented design principles in Java. This has gotten better as Java has loosened some of its initial restrictions (e.g. Java 8 allowing first-class functions).

Haskell on the other hand was designed with parametric polymorphism (generics in Java terms) in mind, and as a functional programming language with first-class functions and no object orientation, allowing it to reap more of the benefits of the more theoretical research into type theory and category theory (yes, this gets us into monads, but I don't think they are nearly as mysterious as the internet makes them out to be). In my opinion this has made the abstractions in Haskell a lot sturdier and, more importantly, statically checkable, compared to the object-oriented design principles underpinning a lot of design patterns in Java. Haskell also has global type inference, which means you can avoid explicit types in many cases, especially when prototyping small, pure functions. This benefit is somewhat lessened by the fact that Haskell's error messages can get very complex, but I subjectively believe that this is because the type system is checking a lot more, and a lot more of what gets checked doesn't have to be given an explicit type by the programmer.

Given this I pretty clearly prefer Haskell, but I've spent most of my career writing Java and PHP. In fact, most of my criticism of Java drove me to dynamic languages at first (although I would have preferred Python over PHP). What got me interested in Haskell was the feeling that there should be a way to have the freedom and speed of developing in a dynamic language, while having the same (or better) guarantees as a statically typed language. This is what led me to read about type inference.

Now, the complexities of Haskell (or even worse, Haskell with GHC extensions) hardly make my dream a reality, but I still feel like I write a lot fewer types while having the compiler do a lot more work for me. What has made my dream more of a reality is actually Elm[0], which has similar theoretical foundations to Haskell, but only has generics (not interfaces/type-classes like Java/Haskell). It's a language in which you can just write your code without types like a dynamic language, rapidly iterate on it, and then add a type signature when you're satisfied with it. (That's almost what I do in Haskell as well, but I can run into more complex problems.)

(This characterisation is probably colored by the period when I used Java most heavily and might be slightly unfair, but I also limited the description of Haskell to the most basic features, which it has had since the 90s.)

tldr; mmm, delicious global type inference..

[0] http://elm-lang.org


A better type system != better software.

