I've deleted this post. It was not very well written and, in part because of that, attracted many negative reactions from people defending Haskell without really understanding my points. I am no longer willing to defend this post against them, in part because, as mentioned, it did indeed have a fair few problems.
Languages are tools. Some tools are more (or less) suitable for a particular problem domain.
Pure functional programming languages, like Haskell or the rapidly evolving Elm, have an appeal as a viable alternative to OO languages. Because they are pure, they disallow the option of backsliding to OO ways.
The FP benefits I expect in an FP language:
Pragmatic functional programming languages, like OCaml or F#, compromise on the FP-ness in order to "get work done". FP purists might find them distasteful.
Hybrid languages, like Clojure (Lisp, with some FP constructs), or Scala (primarily OO, with some FP constructs), can be a good way to get some FP on their respective platforms.
And then there are folks that try to promote Lisp, C++, or Swift as functional programming languages, merely because they have some FP-isms or FP-ishness. Which just makes me want to tell them, "Please use F# for a couple months. Then you'll understand what a FP language is, and why (non-FP language X) is not an FP language."
I've programmed in Lisp for about 3 years. I've programmed in F# for a year. I can't imagine that anyone who has programmed in any of the ML family of languages would describe Lisp as an FP language. Lisp is more powerful than that; I'd describe Lisp as a programmer's programming language, and any other language falls into The Blub Paradox.
But that doesn't make Lisp the most suitable language for all problem domains.
I think... the future of programming languages will be FP and DSL. It will take years before FP and DSL overshadow OO languages, but that is how I read the tea leaves.
Ah, yes I've seen this! Some very interesting ideas there.
One thing I can't get behind is explicit, in-code transfer of control. I'd like to see what is computed and how/where it is computed expressed in very different stages. Though that's admittedly a rather big departure from traditional programming.
You're right, I had the wrong definition of purity and side-effects.
Nevertheless, the intuition remains. By allowing errors to be based on ⊥, types in Hask become less informative than they would be in just the Set category extended with infinite recursion (let's call it Set∞). Essentially it is much like using a Kleisli category of Set∞ with the monad `Either Error`. This is very similar to what you get when allowing side-effects, which use the `IO` monad instead.

My confusion came from incorrectly thinking that a side-effect was anything not properly represented in the type system, and that purity merely means "no side-effects".
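(For illustration, a minimal sketch of that contrast, with a hypothetical `Error` type: the ⊥-based version hides failure from the type, while the `Either`-based one surfaces it.)

```haskell
-- ⊥-based: the type promises an 'a' it cannot always deliver.
headUnsafe :: [a] -> a
headUnsafe (x:_) = x
headUnsafe []    = error "empty list"  -- inhabits 'a' via ⊥

-- Either-based: failure is visible in the type.
data Error = EmptyList deriving Show

headSafe :: [a] -> Either Error a
headSafe (x:_) = Right x
headSafe []    = Left EmptyList
```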
I agree that, with hindsight, String was a mistake. But I've never understood why some people think this is such a serious problem. Just import Data.Text and carry on.
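For example, a minimal sketch of that workflow (assumes the `text` package):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- OverloadedStrings lets string literals be Text directly.
import qualified Data.Text as T
import qualified Data.Text.IO as TIO

greeting :: T.Text
greeting = T.toUpper "hello, world"

main :: IO ()
main = TIO.putStrLn greeting
```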
Not sure I understand this point, since you actually can force evaluation of Haskell expressions.
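For instance, a small sketch of the standard tools (`force` requires the `deepseq` package):

```haskell
{-# LANGUAGE BangPatterns #-}
import Control.DeepSeq (force)

-- Bang pattern: the accumulator is evaluated on every step,
-- avoiding a chain of lazy thunks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = do
  let x = 1 + 2 :: Int
  x `seq` print x                 -- seq forces x to WHNF first
  print (force [1, 2, 3 :: Int])  -- force evaluates deeply
  print (sumStrict [1 .. 1000])
```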
I'm a bit confused. How does the supposed inability to forgo lazy evaluation lead to unnecessary computation?
Why can't it be both?
How so?
...which are? So far you have mentioned several non-problems (laziness, "executables", ...). No wonder there are no plans to "fix" those.
Haskell/GHC is constantly being researched by academics and industry users, and improved/updated as a result. The standard got updated less than 10 years ago. Where is the resistance to change? You haven't mentioned anything relevant apart from String.
Compile time is a property of a specific implementation, not a language feature.
I agree that compile times could be improved, but these other two seem a bit unsubstantiated. Especially when there are tons of benchmarks out there that show GHC being mostly competitive with C/Rust, and outperforming several other GCed languages. Not that benchmarks are in any way reliable evidence, but then where's yours?
It's not serious, more like a pimple on an otherwise stunning face. Unfortunately there are a few more. Alternative preludes, like protolude, offer pretty nice replacements, but I expect they wouldn't work well with other packages (though I have not tested this).
This bit was more about automated optimization and reasoning. Because Haskell is Turing-complete, there are cases where we don't know whether evaluation of a redex will terminate; and because the language is non-strict, the programmer is allowed to write a redex that doesn't, in fact, terminate, while still expecting the program as a whole to terminate.
A simple, synthetic example:
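(The original snippet appears to have been lost in formatting; this is a minimal sketch consistent with the discussion below, assuming a consumer like `take x`.)

```haskell
-- 'repeat 1' is an infinite list; under lazy evaluation only
-- the x elements that 'take' demands are ever produced.
firstOnes :: Int -> [Int]
firstOnes x = take x (repeat 1)
```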
Here, `repeat 1` can be evaluated indefinitely. If we don't know `x` yet, we cannot eagerly evaluate `repeat 1` on a different thread without risking unnecessary work (e.g. if `x` turns out to be small). Worst case scenario, all your threads are repeating 1s and your program livelocks. OTOH, lazy evaluation will always terminate.

There are some tricks you can pull, but from what I understand there's no catch-all solution.
Personally I've come to think we're better off staying in the `Set` category (where everything terminates) and leaving lazy vs eager evaluation as a later optimization problem.

It can, but various 'pragmatic' decisions, e.g. `undefined`, `Any`, `unsafeCoerce`, `unsafePerformIO`, make me believe it isn't.

It isn't, I had definitions mixed up. See my answer to Emily Pillmore.
Since writing this article I've realized I was wrong about this bit. Haskell is not resistant to change. Rather it is a language by academics and for academics.
If Haskell were written for business it would be very different, but that topic is too large to discuss here, and I'm still playing around with my own compiler to test a few theories.
I am still calling bullshit on "avoid (success at all costs)"; it just defines success in terms of academic adoption rather than global adoption.
`:` as cons rather than type declaration, `fail` in Monads... alternative preludes have more examples.

Yes, but Haskell is pretty much GHC, unless I've missed something major.
In retrospect, this wasn't a good point. Performance is good enough in most cases. If we're using Python and JS for application development, Haskell will be plenty fast. Not sure if latency spikes and multithreading problems due to GC are still a thing. Haskell can still do better though, using e.g. heap recycling. From what I understand, memory usage is also not great.
Finally, pure FP should be able to surpass even C in practice, since the latter has less 'wiggle room' for optimization on e.g. different hardware.
The idea of evaluating things in parallel to save time makes more sense in an eager language.
In a lazy language, laziness is your optimization. You don't need to optimize thunks that you never evaluate. Moreover, laziness gives you memoization for free in many cases. Generally, laziness gives you better asymptotics than eager evaluation, which instead systematically confines you to worst-case complexity.
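The classic illustration of that free memoization (a standard sketch, not from the original thread):

```haskell
-- Each element of 'fibs' is computed once and then shared;
-- later lookups read the already-evaluated thunk instead of
-- recomputing the whole subtree of calls.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```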
Even dependently typed languages designed to be sound logics have escape hatches.
The PL community has come a long way, but sometimes you just need to throw in the towel and admit that you know more than the machine about your program.
Haskell programmers know that unsafe functions are unsafe, and should only be used in specific instances where it is known that they do not lead to unsoundness. Trying to suggest that Haskell is no more pure than C because of unsafePerformIO is a ridiculous proposition.
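For illustration, one of the standard 'known safe' idioms (a sketch; the NOINLINE pragma is what keeps it sound):

```haskell
import Data.IORef (IORef, newIORef)
import System.IO.Unsafe (unsafePerformIO)

-- Top-level mutable cell: NOINLINE guarantees the IORef is
-- allocated exactly once, so sharing stays predictable.
globalCounter :: IORef Int
globalCounter = unsafePerformIO (newIORef 0)
{-# NOINLINE globalCounter #-}
```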
Also, what do you mean by Any?
This stopped being true years ago and is now factually wrong. For example, the majority of the members of the GHC steering committee are from industry.
No argument there!
Again, we are in agreement. The point is that there is a design choice here. Should the programmer sometimes jump through hoops to appease the compiler in order to gain very strong guarantees? Different situations prefer different answers.
Not saying that! The closest thing I would argue is echoing "The C language is purely functional", where purity is considered a binary property, not a quantitative one. Beyond that, it's mostly a matter of trust.
It's a type in Haskell that can be cast to/from any other type. Useful e.g. when implementing an open sum.
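Roughly like this (a sketch using `GHC.Exts.Any`; the coercions are only sound if the consumer knows the payload's real type):

```haskell
import GHC.Exts (Any)
import Unsafe.Coerce (unsafeCoerce)

-- A heterogeneous list: each payload is erased to Any, and the
-- caller must coerce back to the exact original type.
cells :: [Any]
cells = [unsafeCoerce (42 :: Int), unsafeCoerce "hello"]

recovered :: (Int, String)
recovered = (unsafeCoerce (head cells), unsafeCoerce (cells !! 1))
```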
This was more a comment about the "soul" of the language than usage statistics.
I'd like to emphasize that the point of the post, which I utterly failed to bring across, is not that Haskell is bad; certainly not compared to existing languages (it is my favorite language ATM). Instead, I wanted to say that its dominant position is causing people to equate purely functional programming with Haskell, but Haskell has made many choices beyond functional purity that are not necessarily pareto-dominant. So while I believe functional purity (not quite that, but close enough) is always the right answer, Haskell is not.
EDIT: fixed mistake, wrote pareto-optimal when I meant pareto-dominant.
I still have no idea what you are talking about. Afaik there are no Any types in the Haskell standard or in base, so you will have to link to it.
If you mean the Any type that you can define as an existential, like Dynamic, there's nothing unsafe about it.
I think the problem with Haskell right now is the lack of good quality tutorials. Readers trying to learn the language have only a few good options like:
learnyouhaskell and schoolofhaskell, and to be honest those look dated.
Compare that to JavaScript or TypeScript, where you can find all sorts of up-to-date tutorials and expert opinions.
Also if you search in stack overflow the top questions are about fundamental things like
stackoverflow.com/search?q=haskell
If you just look at those titles you may wonder whether this language is actually used in practice or is just a toy language.
It's also really difficult to keep things simple and explain things in a concise way before the reader bails out. Personally, the moment a tutorial touches terms like monads or category-theory jargon, I get lost.
Haskell is certainly one of the less popular languages out there. I think a lack of tutorials is just a symptom of that. The community does have a strong basis in math, because having that will make you appreciate the language more. I do believe Haskell has not done a very good job of translating the theoretical know-how into practical benefits. Not yet, anyway.
Juss stick yer Rust in an AWS Lambda and let 'er rip!
That's all the FP you really need ;)
~ Me, an intellectual, xD
What do you think about Scala? I am currently enrolled in Martin's course and I am loving it.
Scala is not a purely functional programming language, but if you use its functional features with a functional library like Cats, it will be a great choice.
I've only looked at it briefly. I saw side effects and decided to look no further. If you have side effects, you're not doing FP.
Yes, Scala gives you that option, but it doesn't mean you can't do FP in Scala. It has all the features of an FP language. It also gives you an amazing type system, consistent APIs for all the data structures, concise syntax and many other great features. Compile time is fast and, like Haskell, it has a lot of features, but you don't have to know everything to get started.
Also, learning FP is more of a paradigm shift, and that's probably why most of us think that Haskell or Scala are not beginner friendly: we confuse simplicity with familiarity.
You can do functional programming in most languages. The good thing about Haskell is that it does not compromise. Scala forces you to have a lot of self-discipline to not do bad stuff.
That's correct. We might as well call C functional if that's the criterion.
Functional programming entails two things:
2 does imply 1, but let's ignore that for now.
If you don't have the guarantee of 1, there are a lot of properties that just do not hold for your program. You might call it functional-style programming, but calling it functional is incorrect.
No. I have played around with HPC a bit, but nothing client-facing.
The server I'm currently working on is made to fully scale horizontally, so it should theoretically be capable of keeping up with the likes of YouTube, but I haven't gotten to the point of testing capacity yet.
Previous backends I set up would usually not even get to 1k / day. Which was a bit frustrating.
Not mine, because I mostly write internal tools. But others have.
`warp`, the Haskell web server, has really good performance. Most of it is thanks to the excellent runtime of Haskell. I once saw a blog post where they wrote a simple web server for static files and the performance was comparable to nginx.

According to field reports by my fellow colleagues, every naive network-bound implementation does 8-9K RPS, which is good enough 9 times out of 10.
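For reference, the usual minimal warp setup (a sketch; assumes the `wai`, `warp`, and `http-types` packages):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Network.Wai (Application, responseLBS)
import Network.Wai.Handler.Warp (run)
import Network.HTTP.Types (status200)

-- Reply "Hello!" to every request on port 8080.
app :: Application
app _request respond =
  respond (responseLBS status200 [("Content-Type", "text/plain")] "Hello!")

main :: IO ()
main = run 8080 app
```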
I think FP and Haskell devs are getting trapped by the same mistakes as OO, and could eventually find themselves similarly discredited. IMO there is an unhealthy (historically West Coast) focus on programming languages while ignoring the automation potential of monads. NYC fortunately is not following this path. The smarter the monad, the less worry about code.
AI-driven software automation, e.g. monadic or RPA-like automation, is the future, and Haskell will be struggling to stay relevant in a few years. No one wants to keep coding all that plumbing just to orchestrate some lambda code.
I love Haskell. It's cool.
Probably have to find the material that suits you first, but it's much easier now.