My take is that he is making the point that in many systems your data is much like documents that don't have a schema, and that this is well modelled by nested associative arrays (aka maps), where an absent key is different from a present key with a null value.
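A minimal Python sketch of that distinction (the keys and values here are made up for illustration):

```python
# In a plain dict (an associative array), a missing key is
# distinguishable from a key that is present with a null value.
doc = {"name": "Ada", "email": None}  # email is known to be empty

"email" in doc   # True  -> key present, value happens to be None
"phone" in doc   # False -> key absent entirely

# A sentinel makes the two cases explicit:
MISSING = object()
doc.get("email", MISSING) is None      # present, with a null value
doc.get("phone", MISSING) is MISSING   # absent altogether
```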
I don't think what he is saying is particularly controversial; it's simply part of a larger point he is making. He is saying that Clojure works well where there is a lot of unstructured, incomplete or varying data. He is asserting that this is a common case, one where in a typed language you end up using Maybe&lt;Any&gt; for every document attribute.
He says that in Clojure you pass through what you were not looking for. I am not a Clojure programmer, but what I think he is implying is that you can use the presence or absence of keys, which are not erased at compile time, as a way to decide whether a function operates on the data. So I read “you have it or you don't” to mean “run or don't run”, without messing around with boilerplate type code that adds no real value in this case. Hopefully someone here can enlighten me.
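If that reading is right, the idea might be sketched like this (in Python rather than Clojure, with hypothetical keys; a Clojure version would do the same with plain maps):

```python
# "You have it or you don't": a function acts only when the key it
# cares about is present, and passes everything else through untouched.
def normalise_email(doc: dict) -> dict:
    if "email" not in doc:
        return doc  # not our concern: pass the document through unchanged
    value = doc["email"]
    return {**doc, "email": value.lower() if value else None}

normalise_email({"name": "Ada"})                     # untouched
normalise_email({"name": "Ada", "email": "A@X.IO"})  # email lowercased
```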
If that's how Clojure works, then I can believe it is better than trying to deserialise something like a JSON document into a nested Maybe&lt;Any&gt; type, only to then pattern match and check that every Maybe is a particular something before doing an action.
So far, so uncontroversial. Parsing JSON into types is easy if your JSON was created from structured types; it is very messy when the data structure isn't known in advance. In typed languages we tend to sidestep that with libraries that reflect upon your types and do the messy parsing without hand-writing all the null-checking code. When we want higher performance we use a code generator to create the parser, and we don't mind how ugly the generated code is. We think of it as a small price to pay to extract our typed objects.
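A toy sketch of the "library reflects upon your types" approach, using only the standard library (real codebases would reach for something like Jackson, serde or pydantic; the `User` type and `from_json` helper here are invented for illustration):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class User:
    name: str
    age: int

def from_json(cls, text: str):
    raw = json.loads(text)
    # Reflection: read the declared fields off the type instead of
    # hand-writing a presence/null check per attribute.
    kwargs = {f.name: raw[f.name] for f in fields(cls)}
    return cls(**kwargs)

user = from_json(User, '{"name": "Ada", "age": 36}')
```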
This is a great answer, thank you. You're right, Clojure is well-suited to processing maps with varying keys, which by extension fits well with use cases like arbitrary unknown JSON.
It seems like the "antipattern to maintainability" stance also only makes sense in the context of this specific domain. Still controversial, but less so than the blanket statement.
What is interesting is that Rich built so many systems in C++ and one in C#, and after 18 years came to the conclusion that types don't help at all and that the examples of where they do help are contrived.
Back when I started, OO was the big new thing over procedural. It is still the orthodoxy. Yet more and more people are coming to the conclusion that class-hierarchy polymorphism only helps in very narrow places and hurts maintainability. I worked on several large Java systems with dozens of developers and now agree that classes are overused and abused on the typical business apps I worked on. The last large-scale, multi-team system I worked on was written in Scala, and I found that algebraic types, pattern matching, lexical scoping and functional programming were much better for maintainability.
The talk by Rich suggests to me that I should be more open-minded about types in general being over-applied, much as I have come to appreciate that classes were over-applied.