Jesse Warden

Posted on • Originally published at jessewarden.com

YAGNI For Types

I’ve noticed a disturbing trend the past 3 years: I’ll often end up with too many or overly verbose types. TDD has helped remove them, but I wonder if there is a TDD equivalent you can apply to Type Driven Development?

Background on TDD for Helping YAGNI

When you practice Test Driven Development, one of the side-effects is that you only write the code you need right now, but if you need something later, you can safely add it or modify things. This is often called YAGNI: You Ain’t Gonna Need It. It’s the complete opposite of premature abstractions and “building things you _think_ you’ll need”. Premature Abstractions can actually be quite good; someone straw-manned the word “Premature” in there because, like me, they were probably burned on one or more projects where overly abstracted code was a nightmare to work with. However, it can sometimes be called “basic planning and seeing obvious things coming down the pipe soon”. That said, code is not an asset; it’s almost always, at a minimum, 51% YAGNI and 49% writing code based on your inferences of the future.

Type Driven Development

That’s not how Type Driven Development works. You instead think about the types, now and perhaps in the future, modelling your problem domain with words that make sense for the problem and have shared meaning among the User, Product, Design, and Tech. It has a lot in common with Domain Driven Design (DDD).

In (mostly) soundly typed languages like Elm, ReScript, Haskell, etc. you have more work to do with types because they’re more strict and unforgiving. The tradeoff is that when you are done, you have more assurance that your program will do what you told it to and not blow up (e.g. no exceptions, no null pointers, situations that shouldn’t occur don’t because the types ensure they don’t).

This also has the side-effect of reducing how many unit tests you need to write to “ensure the functions work right”, because the “work right” is handled by the types. When you get more complex types, you can start validating those functions with tests, but it’s often significantly less testing. This in turn gives you more time to focus on Acceptance Tests: answering “does this application feature actually work for the user?”
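
For example, here’s a minimal Elm sketch (illustrative names) of a union type doing the work a unit test would otherwise do:

type Subscription
    = Free
    | Paid

-- No "what if someone passes 'freee'?" test needed;
-- the compiler only allows Free or Paid, and the case is exhaustive.
monthlyPrice : Subscription -> Int
monthlyPrice subscription =
    case subscription of
        Free ->
            0

        Paid ->
            10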

Too Many / Bad Types

In TDD, you don’t look for signs of too much code because the process ensures you only build what you need, when you need it, similar to the Pull Manufacturing Honda & Toyota practice; they only build cars when customers actually ask for them, to avoid too much inventory of cars they can’t easily sell. However, if you start practicing Test After / Test Last, you’ll see some code is either hard to test, or duplicates logic for no discernible reason, or has a lot of abstractions that aren’t used but are still a burden because they are harder to test.

This _clear_ prevention of the problem is what I’m looking for. You can only see it if you go the other, undesirable way; write a bunch of code, attempt to test it after you’ve written it.

So what’s the equivalent for types?

Right now, I only know of a few indications:

  • Faded code
  • Types that are a pain to work with
  • Type conversions

But while I’m sure there are more, what I’m looking for is how to prevent those from ever happening. For now, here’s what the above signs mean and how I typically remedy them.

Faded Code

In VSCode, you’ll see variables, functions, classes, modules, and types that are faded out; this indicates they aren’t used.

Notice in the below code how the Person type is faded because it is not used in the code.
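
Here’s a minimal Elm sketch of the kind of code where this happens (illustrative names; assume an editor that dims unused declarations):

import Html exposing (Html, text)

-- Person is declared but never referenced below,
-- so the editor renders it faded/dimmed.
type alias Person =
    { name : String }

main : Html msg
main =
    text "Hello"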

This visually highlights a problem, quickly. Like ESLint errors that yell about parameters that aren’t used and you’re like “Dude, calm down, I’m still writing the function, geez!”. This can be temporary, so it isn’t always a reliable sign that a type is truly un-used. Perhaps you’re not done typing, which is often the case.

Painful Type Ergonomics

Types that aren’t fun to work with can indicate you have the wrong types. Using the compiler, and tests, you can change them to be more palatable. The issue, again, is “How did we arrive at types that are painful to use?” That’s a pretty open-ended question with a lot of potentially valid answers that don’t actually indicate a problem.

For example, when developers first learn about the Maybe union type, and realize it can remove null pointers, they are filled with joy and use them everywhere. Then they quickly realize, thanks or “un-thanks” to the compiler, they’re now forced to handle 2 scenarios everywhere they use a Maybe. This can become exponentially large when they combine Maybes with other Maybes on a Record. They then learn the huge tradeoff, and decide only to use Maybes when absolutely forced to, or to push them to the edges, like code that decodes environment variables, for example. Learning about the tradeoffs of types is completely valid.
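
For example, a Record with a couple of Maybe fields (hypothetical names) forces every consumer to handle all the combinations:

type alias Profile =
    { nickname : Maybe String
    , age : Maybe Int
    }

-- Two Maybes means 4 combinations to handle, every single time.
describe : Profile -> String
describe profile =
    case ( profile.nickname, profile.age ) of
        ( Just nickname, Just age ) ->
            nickname ++ " is " ++ String.fromInt age

        ( Just nickname, Nothing ) ->
            nickname

        ( Nothing, Just age ) ->
            "Somebody aged " ++ String.fromInt age

        ( Nothing, Nothing ) ->
            "No idea who this is"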

Another example: while I may understand Phantom Types, another developer is not only confused, but doesn’t even know where to start when debugging a compiler error. The same happened a lot in my early Functional Programming years writing curried functions in JavaScript & Python: fellow devs had no idea what was going on… then later neither did I, because we had no types when these functions were piped together, and JavaScript has bad runtime exceptions for curried functions. So a developer’s familiarity with a concept plays a big role, and is contextual to where the team is at, skill-, preference-, and linting-wise.
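
For reference, here’s a minimal Elm sketch of a Phantom Type (a made-up units example); the `unit` type variable exists only at compile time to stop you from mixing values up:

-- `unit` never appears on the right-hand side; it's a compile-time tag.
type Length unit
    = Length Float

type Meters
    = Meters

type Feet
    = Feet

meters : Float -> Length Meters
meters value =
    Length value

feet : Float -> Length Feet
feet value =
    Length value

addLengths : Length unit -> Length unit -> Length unit
addLengths (Length a) (Length b) =
    Length (a + b)

-- addLengths (meters 3) (feet 10) -- compiler error: Meters doesn't match Feet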

Both Angular and Elm went through a similar path in HTML form building where many devs would heavily type the form plus its abstractions, but it became super hard to reason about and debug when something went awry. Both reversed course and just provided lighter abstractions over the state of a particular field and model to indicate whether the data was valid, and whether (and when) we should show an error on the form field. So where the industry is at with what’s considered normal practice for a particular problem plays a role as well.
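
A lighter abstraction of that sort might look something like this sketch (not Angular’s or Elm’s actual API; the names are made up):

type alias Field =
    { value : String
    , touched : Bool
    , error : Maybe String
    }

validateAge : Field -> Field
validateAge field =
    case String.toInt field.value of
        Just _ ->
            { field | error = Nothing }

        Nothing ->
            { field | error = Just "Age must be a whole number" }

-- Only show the error once the user has actually touched the field.
shouldShowError : Field -> Bool
shouldShowError field =
    field.touched && field.error /= Nothing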

Types I enjoy, you may not. Types I enjoyed 3 months ago, I may not enjoy now. Types the general industry promotes, my team may disagree with. The types generally used for handling forms may not work for our particular context. It’s hard for me to find a way to prevent this. I say the words “Ya just have to try ’em out to see if you like them in your head” and I _immediately_ think about the Refactoring step in Red Green Refactor, but for types. Then I’m like “Wait… what’s the Red and Green steps for Types?”

Type Conversions in Gradually Typed Languages

Those from ML languages won’t see a problem with type conversions. In fact, anyone who is a fan of Onion/Hexagonal architectures, or any type of parser combinators, will see this as a fun puzzle that results in extremely dependable and debuggable parsers. ML languages often provide wonderful ways to convert types, both inside the type system, like lift functions for Maybes, as well as decoders for dealing with external data outside the type system. In Elm, that’d be the decoders used for JSON, for example.
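
Think Maybe.map and Maybe.withDefault in Elm, for example, which let you work with the value inside a Maybe without unwrapping it by hand:

halfOfFirst : List Int -> Float
halfOfFirst numbers =
    List.head numbers
        |> Maybe.map (\n -> toFloat n / 2)
        |> Maybe.withDefault 0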

However, in gradually typed languages like TypeScript, Luau, and Python’s typings, there is way more breadth and choice for the developer, which has its pros and cons. Worse, though, is type narrowing. Narrowing types, I think, is one of the hardest type problems I’ve encountered, and one that often allows less safe code to exist because, while the ROI is there, the experience level and deep knowledge of JavaScript’s type primitives it requires is pretty vast. Most devs using TypeScript do not have knowledge of typeof vs Array.isArray vs Error.isError, or object.hasOwnProperty vs Object.hasOwn, plus what you do when you can’t support later ECMAScript compilation targets. That’s a lot to ask of a developer who just wants to convert a string to an Enum, or an Object to YourRecord.

For example, if I get some JSON in Elm from a REST API I can’t control, nor influence change on, the data may look like this:

[
  { "first_name": "Jesse", "last_name": "Warden", "age": 45 },
  { "first_name": "Albus", "last_name": "Dumbledog", "age": 8 },
  { "first_name": "Pixie", "last_name": "Warden", "age": 2 }
]

In Elm, I’d create types to match that so the decoders are easy to write and relatively simple to read:

import Json.Decode as JD exposing (Decoder)

type alias PersonJSON =
    { first : String, last : String, age : Int }

decodePerson : Decoder PersonJSON
decodePerson =
    JD.map3 PersonJSON
        (JD.field "first_name" JD.string)
        (JD.field "last_name" JD.string)
        (JD.field "age" JD.int)

decodePeople : Decoder (List PersonJSON)
decodePeople =
    JD.list decodePerson

However, the Figma comps created by the Designer don’t have a concept of first and last name; it’s 1 thing. Age is actually treated as a Float because, if someone is at the 6-month mark or above, they’re highlighted a different color so the teachers can see those about to age out of a particular test. Rather than have bizarre conversions all over my UI code, I create simpler types so I can both do business logic more easily and build my UI code more easily, with types that are easier to work with:

type alias Person =
    { name : String, age : Float }

jsonToPerson : PersonJSON -> Person
jsonToPerson personJSON =
    { name = personJSON.first ++ " " ++ personJSON.last
    , age = toFloat personJSON.age
    }

So nice. Doing that in TypeScript or even ReScript would be pretty frustrating, and super verbose. ReScript has improved JSON APIs coming with pattern matching, but still, in ReScript I’d default to their third-party libraries, and in TypeScript, Zod. Creating a PersonJSON in TypeScript from JSON that is probably any or unknown is bad enough by hand, but at least converting from PersonJSON to Person would be easy.

However, in all 3 languages, as you get better, or more comfortable with your unit/acceptance tests and your skills with the APIs… you start to question _why_ even parse the JSON into an intermediary type. Is a harder-to-read decoder worth it if you can simply remove the intermediary and go directly to the types _you want_ and the types _you need_? Let’s just use Person, and get rid of PersonJSON. Sure. A new coder in TypeScript, with little knowledge of JavaScript and no knowledge of Zod, would not. Is that just education, or is there some type creation process to guide them down the right path, just like TDD does regardless of skill level?
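
In Elm, for example, skipping the intermediary type and decoding straight into the type you actually want could look like this sketch (reusing Person and the Json.Decode import from above):

decodePersonDirectly : Decoder Person
decodePersonDirectly =
    JD.map3
        (\first last age -> { name = first ++ " " ++ last, age = toFloat age })
        (JD.field "first_name" JD.string)
        (JD.field "last_name" JD.string)
        (JD.field "age" JD.int)

decodePeopleDirectly : Decoder (List Person)
decodePeopleDirectly =
    JD.list decodePersonDirectly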

Conclusions

Ending up with too many types, un-used types, types that just aren’t fun to work with, or types that are over-specified seems like a waste. While they may just be the sawdust that comes off of making something beautiful while practicing TDD, I still feel like Type Driven Development should be done first, even before I write the first tests, even if those types live in the test file. Types save work by removing the need to write certain tests, ensure I understand the requirements we have, and ensure we’re using good, relevant language, with 1 term instead of multiple, sharing the existing types we already have.

On the flip-side, I’ve found that if you keep practicing TDD, the un-needed types DO get removed/deleted because your refactoring step continues to allow you to refine the design. There’s a part of me that just feels there should be a TDD equivalent for types.
