Using JSON as a configuration format is a mistake.
Yes, a mistake. Not a preference. Not a style choice. An architectural mistake.
That statement tends to bother people because JSON works. And when something works, the industry has a strong tendency to stop questioning whether it actually makes sense.
JSON is an absolute success as a transport format. It is predictable, deterministic, easy to parse, and almost universal. Between machines, it is excellent. The problem starts when we decide that those same qualities also make it suitable to be written, read, versioned, and maintained by humans.
They don’t.
JSON is rigid, verbose, and deliberately limited. It does not allow comments, it does not communicate intent, it does not provide context, and it does not tolerate error. These characteristics are virtues when the goal is interoperability. They turn into pure friction when the goal is to express rules, configuration, or domain structures.
Still, we keep using JSON as a configuration format, as an improvised DSL, and as an authoring layer in systems where humans spend hours reading and writing it. Does it work? It does. But it works poorly — and the fact that we have normalized this suffering does not magically turn it into good engineering.
This text is not an attack on JSON. It is an attack on the insistence on using the right tool in the wrong place, and then acting surprised when the experience for the people writing it is miserable.
Why JSON won
JSON didn’t win because it is beautiful, elegant, or expressive. It won because it is boring, limited, and predictable — and that is an enormous advantage when the problem is system-to-system communication.
It is a small, closed format, with no meaningful syntactic ambiguities, easy to implement and hard to misinterpret. It drastically reduces the space for creativity, which is exactly what you want when two processes that don’t know each other need to exchange data without negotiating intent, context, or implicit meaning.
From a machine-to-machine communication perspective, JSON is almost ideal. It forces simple structures, does not allow weird shortcuts, carries no implicit semantics, and does not try to be clever. It is essentially a predictable envelope of primitive data. Machines love that. Compilers love that. Infrastructure loves that.
When someone says “my API speaks JSON”, everyone knows exactly what that means. There are no surprises, no creative interpretations, no “yeah, but in this case…”. There is a contract. And rigid contracts are excellent when nobody wants to negotiate anything at runtime.
As a transport format, JSON deserves all the respect it gets.
Where everything starts to go wrong
The problem begins when someone looks at this same format — clearly designed for machine-to-machine communication — and decides it is also a good idea for human authorship.
This usually doesn’t come from malice. It comes from architectural laziness. The reasoning is simple: if the system consumes JSON, let people write JSON. It sounds logical. It is also a terrible idea.
JSON requires absolute syntactic precision to express even the smallest intent. One misplaced comma, one missing quote, one poorly closed bracket, and everything breaks. There is no “kind of broken”. There is a parse error and the conversation is over.
That behavior is acceptable for machines. For humans, it is simply hostile — and that’s coming from someone who actually likes strong typing and explicit contracts.
JSON is structurally hostile to humans
This is not a matter of taste. It is structural.
JSON does not allow comments. JSON does not allow trailing commas. JSON does not allow shortcuts. JSON does not allow syntactic relaxation. JSON does not allow context. JSON does not allow intent — only structure.
That forces the person writing it to constantly think about syntax instead of meaning. Instead of reasoning about the domain, you are mentally validating whether you closed all brackets, used double quotes everywhere, and didn’t forget a comma. That is a high cognitive tax for a task that is already complex by nature.
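These restrictions are easy to verify. A minimal Python sketch (the `retries` key is purely illustrative) shows how unforgiving the parser is — both a trailing comma and an explanatory comment are fatal:

```python
import json

# One trailing comma — tolerated by JavaScript, TypeScript, TOML, and
# virtually every authoring format — is a hard parse error in JSON.
try:
    json.loads('{"retries": 3,}')
except json.JSONDecodeError as err:
    print("trailing comma:", err.msg)

# A comment explaining *why* the value was chosen is equally fatal.
try:
    json.loads('{"retries": 3 /* upstream rate-limits us */}')
except json.JSONDecodeError as err:
    print("comment:", err.msg)
```

There is no warning level, no partial acceptance: the document parses or the conversation is over.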
When you see a large JSON file written by hand, it rarely communicates clarity. It communicates resistance. It communicates that someone suffered.
JSON configuration is a normalized mistake
JSON configuration files are probably the most common example of this dysfunction. They are everywhere, they work, and everyone accepts them as “just the way things are”. That does not mean they are a good solution.
Configuration, by definition, is something that will be read, written, reviewed, versioned, and discussed by humans. It carries intent, context, and frequently requires explanatory comments. JSON offers none of that.
The result is verbose files that are painful to review in pull requests, impossible to properly comment, and full of external conventions like “this key is optional, but only in this scenario, explained in a README somewhere else”.
It works. But it works powered by spite.
The pro-JSON argument
“But JSON is simple. Everyone knows it.”
Yes. Everyone knows it. That does not make it appropriate. That is a weak argument.
Everyone knows assembly too. That does not mean we write business rules in it. Familiarity is not a tool-selection criterion. Fitness for the problem is.
JSON is simple to consume, not simple to produce. And that distinction matters far more than it seems.
Authoring formats exist for a reason
When you use TypeScript, YAML, TOML, HCL, or any minimally decent DSL to define complex structures, the difference is immediate. These formats were designed to be written, read, and maintained by people.
They allow comments. They allow some flexibility. They communicate intent more clearly. They reduce the cognitive cost of writing. And, most importantly, they let humans think about the problem first and the representation second.
The fact that many of these formats are eventually converted into JSON internally is not a coincidence. It is healthy architecture.
The pipeline that avoids unnecessary suffering
In well-designed systems, the flow is usually simple and predictable:
- A human writes something expressive, validatable, and comfortable (often typed).
- The machine validates, transforms, and normalizes that structure.
- JSON only appears when it needs to travel or be persisted as a contract between systems.
This separation is not bureaucracy. It is respect for human limits.
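The three steps above fit in a few lines of Python. This is a sketch, not a prescription — the `CacheConfig` type and its field names are invented for illustration:

```python
import json
from dataclasses import asdict, dataclass

# Step 1: a human works with a typed, self-documenting structure.
@dataclass
class CacheConfig:
    max_entries: int
    ttl_seconds: int

    # Step 2: the machine validates and normalizes.
    def __post_init__(self) -> None:
        if self.max_entries <= 0:
            raise ValueError("max_entries must be positive")
        if self.ttl_seconds < 0:
            raise ValueError("ttl_seconds cannot be negative")

# Step 3: JSON appears only at the boundary, as a contract.
config = CacheConfig(max_entries=1024, ttl_seconds=300)
payload = json.dumps(asdict(config))
print(payload)  # {"max_entries": 1024, "ttl_seconds": 300}
```

A typo here is a type error or a `ValueError` with a message, caught before anything ships — not a silent misconfiguration or a cryptic parse failure at deploy time.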
- If you are writing strongly typed configuration for a machine, TypeScript works extremely well.
- If you are writing a configuration structure that needs to be read and maintained, YAML or something similar makes sense.
- If a machine needs to send data to another machine, JSON is the perfect choice.
When you build an API, for example, you work with objects, types, validations, and abstractions. JSON is generated at the very end of the process, crosses the network, gets parsed on the other side, and immediately stops being a human concern. Nobody touched that JSON directly — and that is exactly the point.
This pattern shows up repeatedly in modern tools because it works. It respects human limitations and exploits machine strengths. Forcing JSON from the start skips this step and pushes all the cost onto the people who should pay it the least.
Hater or fan?
I still love JSON where it makes sense: transport, contracts, serialization, system-to-system communication. And I still hate JSON when someone expects me to write, review, and maintain complex structures in it as if that were an acceptable experience.
This is not stubbornness.
It is not a trend.
It is not technical elitism.
It is basic respect for the people who have to deal with the code.
JSON does not need to be omnipresent to remain valuable. It just needs to stay in the right place.
And that place is definitely not between you and the keyboard at two in the morning.
Note: all images from this post were generated by Nuno Banana