When I first started writing about functional programming and its various benefits, I checked with Wikipedia to make sure I had the right definition of "functional": declarative (i.e. no side-effects) and based on the evaluation of mathematical functions. "Functional style" was instead used to denote having functions as first-class citizens.
It was brought to my attention, however, that in academia the lack of side-effects is not always considered a necessity to call a language "functional".
Regardless of which definition is 'correct', I think it is better to be explicit to avoid future miscommunications.
It therefore becomes important to distinguish between purely functional programming and impurely functional programming.
Purely functional programming requires the absence of side-effects. Pure languages are generally statically typed, and for good reason: a function like
λa.a/7 makes no sense when handed a string, as a string cannot be divided by a number, and a type system rejects that misuse at compile time instead of failing at run time. Dropping types would also break the Curry–Howard–Lambek correspondence, meaning one would lose many interesting properties.
Purely functional programming languages include Haskell and Elm.
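To make the λa.a/7 example concrete, here is a minimal sketch of it in Haskell (the name `divBySeven` is mine, chosen for illustration). The inferred type constrains the argument, so applying it to a string simply does not compile:

```haskell
-- λa.a/7, written as a named Haskell function.
-- The Fractional constraint means only number-like types are accepted.
divBySeven :: Fractional a => a -> a
divBySeven a = a / 7

main :: IO ()
main = print (divBySeven 14)
-- divBySeven "hello" would be a compile-time type error,
-- not a runtime crash.
```

Running it prints `2.0`, since the numeric literal defaults to `Double`.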
Impurely functional programming is perfectly fine with side-effects. It can be done in a fuckton of languages: JS, Clojure, Lisp, C, Scala, etc.
All my writing thus far has been strictly about purely functional programming and cannot be applied to impure languages, as they do not have the necessary properties for the Curry–Howard–Lambek correspondence to hold.
It is debatable whether the properties of purity make up for the effort of avoiding side-effects. I myself am firmly of the opinion that yes, purity solves so frigging many problems that it is absolutely worth it. On the other hand, things like displaying output or reading user input are effects, so one always needs some way to represent effects in the purely functional world.
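As a small taste of what "representing effects" looks like, here is a sketch in Haskell, where the pure logic is an ordinary function and the effects are quarantined behind the `IO` type (the function name `greeting` is mine, for illustration):

```haskell
-- Pure part: building the message is just a function from
-- strings to strings, with no effects anywhere.
greeting :: String -> String
greeting name = "Hello, " ++ name ++ "!"

-- Impure part: this value *describes* reading a line and
-- printing a response; the runtime performs it when main runs.
main :: IO ()
main = do
  name <- getLine
  putStrLn (greeting name)
```

The type system keeps the two worlds apart: `greeting` can be tested and reasoned about like a mathematical function, while everything effectful is marked `IO`.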
Both the advantages of purity in practice, and how to represent effects, are subjects for future posts.