I've been delving a bit into the functional language Haskell recently, and I discovered that it has a somewhat unusual way of handling function parameters. Usually, you supply the arguments and call a function, and that's the end of the story.

For example, the following trivial JavaScript `sub` function just subtracts its two arguments:

```
const sub = (first, second) => first - second
```

We can call it as follows:

```
sub(7,2)
```

Let's write `sub` in Haskell and find out how it is different from the JavaScript version:

```
main = print (sub 7 2)
sub :: (Num a) => a -> a -> a
sub first second = first - second
```

Let's see the result:

```
C:\dev>ghc sub.hs
[1 of 1] Compiling Main ( sub.hs, sub.o )
Linking sub.exe ...
C:\dev>sub.exe
5
```

This looks as though it's the same function. The signature seems to be saying: take two numbers as parameters and return a third number as a result. However, notice how there are no parentheses in `a -> a -> a`? One might expect something more like `(a, a) -> a`. That's actually a clue that something slightly different is going on.
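Because `->` in a type signature associates to the right, the signature can be parenthesized explicitly without changing anything. A small sketch (the name `sub'` is just for illustration):

```haskell
-- The arrow in type signatures associates to the right,
-- so these two signatures describe the same type:
sub :: (Num a) => a -> a -> a
sub first second = first - second

sub' :: (Num a) => a -> (a -> a)
sub' first second = first - second

main :: IO ()
main = do
  print (sub 7 2)   -- 5
  print (sub' 7 2)  -- 5
```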

Below I've tried to come up with a way to show this:

```
main = print finalresult
  where finalresult = partialresult 3
        partialresult = sub 7
```

If we modify the main function as above, we can see that calling `sub` with just a single argument, `7`, returns a function. We call this intermediate function with `3`, which then returns `4`, the actual result of the subtraction.

Each time a function is returned, it retains access to the parameters that were passed in to its calling function. Functions that can retain enclosing scope like this, even once execution has moved out of the block associated with that scope, are called *closures*.
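A quick Haskell sketch of a closure (the name `makeAdder` is just an illustrative choice): the returned lambda keeps access to `n` even after `makeAdder` has returned:

```haskell
-- makeAdder returns a closure: the lambda it returns
-- retains access to n after makeAdder itself is done
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> n + x

main :: IO ()
main = do
  let add10 = makeAdder 10
  print (add10 5)  -- 15
```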

So, what is really happening then? In fact, the `sub` function takes a single number as a parameter and returns a function. That function also takes a number as a parameter, and returns the result of the subtraction. This idea of decomposing a function that takes multiple arguments into a nesting of functions where each function just has one argument is called *currying*.

Let's try to simulate this behaviour with JavaScript:

```
const sub = first => {
  const intermediateResult = second => {
    return first - second
  }
  return intermediateResult
}
```

Here's how we'd call this type of function in JavaScript:

```
const result = sub(7)(3)
console.log('subtraction result = ' + result)
```

We call `sub` with `7` as an argument and then we call the function that it returns with `3`. This intermediate function is the one that actually computes the difference between the two values.

In Haskell, currying is built into the language. Any function in Haskell can be called with partial arguments, and the remaining arguments can be applied later.
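For instance, here's a small sketch of partial application (the name `subtractFrom100` is just illustrative):

```haskell
sub :: (Num a) => a -> a -> a
sub first second = first - second

main :: IO ()
main = do
  let subtractFrom100 = sub 100  -- supply the first argument now...
  print (subtractFrom100 40)     -- ...and the second later: 60
  print (subtractFrom100 1)      -- 99
```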

Is currying useful? Consider this common Haskell idiom:

```
map (+3) [1,5,3,1,6]
```

In Haskell we can just call the `+` function with a single argument, `3` in this case. `map` then calls the intermediate function with each of the items in the list as a parameter.

In something like JavaScript, we can't do this directly, but we can get around the problem easily enough with a lambda:

```
[1,5,3,1,6].map(x=>x+3)
```

While currying doesn't strike me as being essential to functional programming in the same way that the concepts of first-class functions and closures are, I have to admit there is a certain orthogonality and conceptual purity to the way Haskell handles arguments.

In particular, currying fits in well with the fact that almost everything in Haskell is evaluated lazily. In that context currying makes sense, since most functions evaluate to a *thunk* anyway and the underlying logic is not fully processed until a complete result is required.
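A tiny sketch of that laziness: mapping over an infinite list just builds a thunk, and only the portion that's demanded is ever evaluated:

```haskell
main :: IO ()
main = do
  -- Mapping over an infinite list builds a thunk; nothing runs yet
  let doubled = map (* 2) [1 ..]
  -- Forcing only the first five elements evaluates only those
  print (take 5 doubled)  -- [2,4,6,8,10]
```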

If you're interested in learning more about Haskell, I highly recommend getting started with the tutorial *Learn You a Haskell for Great Good!*.

## Discussion (4)

Nice article! At the beginning of the article you are mixing several concepts though (at the end some of it is cleared up):

These three concepts are independent but can be used together as you can see in Haskell.

Perhaps historically, when first introduced, currying added value to a language. The same concepts can now be applied in any language that allows lambdas, without the misleading syntax.

I would tend to agree. Originally currying seems to have come more from the math/logic side of things, where it was useful to show from a theoretical point of view that more complex models of computation could be reduced to the much simpler lambda calculus. It also appears to come into play in type theory and category theory.

In terms of programming languages, I don't see any value in going out of one's way to use currying. I'm guessing that Haskell could probably be modified to forgo currying altogether, but since lazy evaluation is such a fundamental aspect of the language, it kind of makes sense to me that they decided to apply currying to function calls too: It feels consistent with their overall approach...

Here's an example of how I think it does kind of make sense in Haskell:
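Assuming the classic `numUniques` definition from Learn You a Haskell (the pointful variant `numUniques'` is my own naming), the two versions being compared would look something like:

```haskell
import Data.List (nub)

-- Point-free style: the list argument never appears in the body
numUniques :: (Eq a) => [a] -> Int
numUniques = length . nub

-- The equivalent pointful version spells the argument out
numUniques' :: (Eq a) => [a] -> Int
numUniques' xs = length (nub xs)

main :: IO ()
main = do
  print (numUniques [1, 2, 2, 3])   -- 3
  print (numUniques' [1, 2, 2, 3])  -- 3
```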

`numUniques` just returns a thunk to which you can then pass an actual list. Note how, in the body of the function, the list that's passed in as a parameter is never even mentioned! `nub` filters out duplicate values from a list, and `.` does function composition, so the point-free version is equivalent to one that names the list argument explicitly. There's kind of a minimalist elegance to the point-free version, but on the other hand the spelled-out version seems to be clearer about what is going on! Maybe it's worth noting, though, that the code is equally lazy in both cases.

Hello, this is an interesting and non-typical article. I enjoyed reading it.

However, I would not agree that Haskell can forgo currying nor would I agree that currying is not fundamental to functional programming.

Lambda calculus defines all functions (abstractions) to take exactly one input and produce exactly one output. Thus, currying is a fundamental property of any lambda calculus and the only tool it has to work with functions taking multiple arguments.
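In Haskell this single-input view can be written out literally as nested one-argument lambdas; a sketch:

```haskell
-- A "two-argument" function is really nested one-argument lambdas
sub :: (Num a) => a -> a -> a
sub = \first -> \second -> first - second

main :: IO ()
main = print (sub 7 2)  -- 5
```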

Since Haskell is built on lambda calculus and is semantically a typed lambda calculus itself, currying is not really an optional feature, but rather a fundamental property of the language. As you mentioned in the article, all functions in Haskell are single-input functions (hence the strange type signatures). This means that it is actually impossible to apply any function without currying coming into play. Take the article's example: the function `sub` is not called with two arguments. What actually happens is that the function is applied to the number `7`. This application returns a new function, which is applied to the number `2`, finally returning the number `5`. All functions (and their applications) in Haskell work like this.

Taking currying away from Haskell would render it useless. You can, of course, allow functions to take more than one argument in the first place. However, by doing this, Haskell would cease to be a lambda calculus and a purely functional language. I would not say that they "decided to apply currying to function calls". Functions and function calls are where currying comes from, and it is actually inevitable.
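The application chain described above can be sketched with explicit parentheses:

```haskell
sub :: (Num a) => a -> a -> a
sub first second = first - second

main :: IO ()
main = do
  -- Function application associates to the left, so these are the same:
  print (sub 7 2)    -- 5
  print ((sub 7) 2)  -- 5: (sub 7) is the intermediate function
```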

If by saying that Haskell could forgo currying you meant that currying doesn't really have any practical applications aside from providing consistent conceptual purity, I would also have to disagree :). The main use case of currying is partial application of functions. I believe this concept is as important to functional programming as inheritance is to OOP.

P.S. I realize some of the things I mentioned here are explained in the article or in one of the prior comments. I've decided to reiterate them for the sake of completeness and clarity :)