User input is inherently stateful; user and machine interact by sharing state. Therefore a monadic representation seems necessary (although you might be able to get away with linear types, input-getter functions, and interleaving, probably not a good idea though).
Randomness is not inherently stateful. A true random function, unlike a pseudorandom one, doesn't care when, in what order, or by whom it is called: it will produce an 'equally random' result in every case.
Haskell may use the IO monad, but that doesn't mean it's necessary or even good. I don't think quantum random number generators were in the back of anybody's mind in the late 80s when Haskell was first created, after all ;-)
To me the situation seems fairly simple: the world is different afterward than it was before. In terms of consistency too, if a language guarantees purity (like Elm or Haskell), it seems a bit silly to equivocate about that.
That's only true if you use seeds, which, as I mentioned, is theoretically unnecessary and could be abstracted away in practice.
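For what it's worth, the seeded approach really does stay pure: if the seed is threaded explicitly, the generator is an ordinary function. A minimal sketch (this is a toy linear congruential generator, not `System.Random`'s API; all names here are illustrative):

```haskell
-- The seed is ordinary data, so stepping the generator is a pure
-- function: same seed in, same (value, next seed) pair out.
type Seed = Int

nextInt :: Seed -> (Int, Seed)
nextInt s =
  let s' = 6364136223846793005 * s + 1442695040888963407  -- LCG step, wraps on overflow
  in (s' `mod` 1000, s')  -- a value in [0, 999] plus the new seed

-- Threading the seed by hand is noisy, but it can be abstracted
-- (e.g. behind a State monad) without giving up purity.
rollTwice :: Seed -> (Int, Int)
rollTwice s0 =
  let (a, s1) = nextInt s0
      (b, _ ) = nextInt s1
  in (a, b)
```

The point being: the "hidden state" of a PRNG only forces impurity if you hide it. Make it a parameter and referential transparency is untouched.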
Purity is not about state, it's about referential transparency. A function is pure iff (if and only if), given the same arguments, it returns the same output.
A random generator takes no arguments and returns a different result every time, so it is inherently impure.
That's an excellent point.
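To make the distinction concrete: with a pure function, any call can be replaced by its result without changing the program, while an `IO Int` is not an `Int` at all, so no such substitution is promised. A small sketch (`double` and `fakeRandom` are illustrative names):

```haskell
-- Pure: referentially transparent. Every occurrence of `double 21`
-- can be replaced by 42 without changing the program's meaning.
double :: Int -> Int
double x = x * 2

-- Impure side, as the type records: "an action that, when run,
-- yields an Int". Two runs of a real random source may differ,
-- so the action cannot be substituted for a plain value.
fakeRandom :: IO Int
fakeRandom = pure 4  -- stand-in for a real random source

-- `y` and `double 21` are interchangeable everywhere:
demo :: Int
demo = let y = double 21 in y + y  -- same as double 21 + double 21
```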
Would you know of any practical arguments (besides memoization) in favor of referential transparency, insofar as it is lost by introducing non-determinism?
Reversibility, for example. Haskell has STM (software transactional memory), which is like a global variable shared between threads; but instead of a race condition when two threads try to write at the same time, the actions get rolled back and then redone one after another. The type system guarantees that you cannot perform monadic (IO) actions while writing to STM, just pure stuff. If you had randomness in there, rolling back and redoing would not give the same result.
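A sketch of that point, assuming GHC's `stm` library: the transaction below lives in the `STM` monad, so the compiler rejects any `IO` inside it (printing, true randomness, etc.), which is exactly what makes rollback-and-retry safe. The `transfer`/`runDemo` names are mine, not from any library:

```haskell
import Control.Concurrent.STM

-- Move n between two shared counters. The body runs in STM, not IO:
-- calling e.g. randomIO or putStrLn here is a type error, so the
-- runtime may roll the transaction back and re-run it freely.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)

runDemo :: IO (Int, Int)
runDemo = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)   -- the whole transfer commits atomically
  (,) <$> readTVarIO a <*> readTVarIO b
```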
That's true, but sort of isn't. The new computation "may as well" have been the old one: it is still equally correct, because true randomness has no hidden state that was affected by the rollback and redo.
This all still ignores the de facto pseudorandom/deterministic nature of most of our machines.