
Frank Puffer

Should Coding be Trivial?

This morning I listened to an interview with Leslie Lamport, a computer scientist who is also known as the initial developer of LaTeX.

One of his statements was that coding should be trivial. The complex stuff should be dealt with before coding, by writing specifications, by creating diagrams and by the use of modeling tools.

This is not the first time I have heard such a statement. In fact I keep hearing it again and again. For quite a while I myself believed that this is the right approach to software development.

It just doesn't work, at least not effectively. And there are reasons for that.

When coding, you have excellent tools that provide instant feedback:

  • An IDE with syntax checking, search functionality, autocompletion, refactoring tools and so on.

  • A compiler and linker or an interpreter that provide more or less descriptive error messages and warnings.

  • Debugging tools

  • Static analysis tools

  • Unit tests

These tools help you do the right things and prevent you from creating bullshit.

When writing specifications or other design documents it is completely different. Your word processor doesn't care whether what you type makes sense. Neither does your favourite diagramming tool. You have no feedback. As a result there will be errors in the design. These errors will only be found during implementation. (Normally the design documents will not even be updated, but that's another story.)

Now some people argue that the lack of feedback can be resolved by using modeling tools instead of word processors and diagramming programs.

There are modeling tools that seem to work well for specific tasks. Probably for relational database design. Maybe for state machine definition in embedded systems design. But I have not seen any modeling tool for general software development that comes even close to providing the benefits listed above. These things have been promised for at least 30 years and still don't seem to exist.

But what makes people so afraid of code? I don't know, but these possible reasons come to my mind:

  1. This is what I call the write-only code mindset. People see code as the final product of the software industry. In most other industries, final products are extremely expensive to modify. I agree that this can also be the case with code - if the code is a mess - but it shouldn't be that way. That's why we have refactoring and code reviews.

  2. People obsessed with design documents often don't code themselves or haven't coded for a very long time. Their notion of coding is like writing assembly instructions, maybe C or Fortran. They don't know about the level of abstraction and the expressiveness that modern languages offer with features like interfaces, modules, generics or higher order functions.
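To make that second point concrete, here is a small Python sketch (the `Order` type and the revenue example are invented for illustration): a higher-order function lets the caller pass the policy in as a function, which in an older, lower-level language would require an explicit loop with mutable state at every call site.

```python
from dataclasses import dataclass

@dataclass
class Order:
    customer: str
    total: float

def revenue_from(orders, predicate):
    """Sum the totals of all orders matching a caller-supplied condition."""
    return sum(o.total for o in orders if predicate(o))

orders = [Order("alice", 120.0), Order("bob", 40.0), Order("alice", 60.0)]

# The policy ("which orders count?") is passed in as a function,
# so the aggregation logic itself never needs to change.
big_orders = revenue_from(orders, lambda o: o.total >= 50)
```

This level of expressiveness is a big part of why low-level design can live in the code itself rather than in a separate document.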

I don't argue against planning parts of the design before coding. I also understand that design documents are useful to communicate, especially with people who don't code. My point is that it is more efficient to do as much as possible in code and use external documents only if required.

Of course this only works if you take measures that prevent your code from turning into a mess. The most obvious are refactoring, code reviews, standards and of course coders who know what they are doing.

Note: These issues are less severe in agile development because the cycles are smaller but they still exist.

Top comments (10)

Kasey Speakman • Edited

I first heard of Leslie Lamport via distributed systems. (vector clocks, Paxos, etc). I take his quote to be analogous to one of Kent Beck's tweets.

for each desired change, make the change easy (warning: this may be hard), then make the easy change

He goes on to say underneath it "Sometimes when the work is hard it signals that we're doing it wrong. Sometimes it's just hard."

I've experienced this many, many times. I started coding and realized there is some hairy edge case that is going to make the code difficult. As a beginner, I would just power through that, and it would turn out to be a source of friction/bugs/support questions later. Nowadays, those cases immediately raise my suspicions that I have misunderstood something. So I stop coding and go talk to people (customers, the team) and try to get clarification (especially answering "Why?"). Then I think about different strategies and work through scenarios (verbally, whiteboard, notepad, etc.) It is surprising how many times, after coming to a better understanding of the problem, that I can simply approach it from a different angle and the hairy edge case no longer exists.

So I agree with Lamport's statement that coding should be the more trivial part. Many of the hard problems can be addressed by gaining an accurate understanding and working through solutions outside of coding. (That funny saying: "Weeks of coding can save hours of planning.")

If the coding is a lot of effort, then it should be because the problem itself is a lot of effort. Not because the mechanics of the implementation make it so.

Frank Puffer • Edited

Sometimes when the work is hard it signals that we're doing it wrong.

But to find it out, you have to start the (coding) work. This is another type of feedback you only get when starting to code. As long as you just write specs and draw diagrams, things tend to look simple.

I started coding and realized there is some hairy edge case that is
going to make the code difficult.

Again, you had to start coding to realize it.

So I stop coding and go talk to people (customers, the team) and try to
get clarification (especially answering "Why?"). Then I think about
different strategies and work through scenarios (verbally, whiteboard,
notepad, etc.) It is surprising how many times, after coming to a better
understanding of the problem, that I can simply approach it from a
different angle and the hairy edge case no longer exists.

Actually this is exactly what I'd recommend: You start coding, find an issue, step back and think about it, maybe draw some diagrams, resolve the issue, go back to coding, find other issues and so on.

But I have the impression that Lamport suggested delaying coding as much as possible. That's what I don't understand, because unless you start, you won't get much feedback.

Kasey Speakman • Edited

I didn't hear the talk to get the full context. Do you have a link?

Nowadays, most everyone does (or wants to do) some permutation of Agile. That supports the kind of process you are talking about. (And what I was also referring to.) However, traditional engineering goes through rigorous specification and design (and testing and nowadays simulation). And during some span of time software was approached in that way. And probably safety/security critical software often still is, because the cost of failure is high. So either perspective can make sense depending on the problem domain.

Also bear in mind that Lamport didn't just make programs but helped make Computer Science itself up to this point. So he probably has an engineering background.

Frank Puffer

The quote is from this podcast.

It is mainly on TLA+, a formal specification language for concurrent systems that he developed. I didn't mention this in my original post because I don't think that many people (including myself) are familiar with TLA+.

Kasey Speakman

Ah, yes. That is a language for formal verification. Seems like kinda the same principle as TDD, but taken to the level of mathematical proof. You write all the assumptions about the inputs and outputs and side effects of methods. Then you write the implementation. Then the verifier attempts to mathematically prove your program is correct. Yeah, it seems like at that point most of the work would be in defining the assumptions.
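As a rough, lightweight analogy only (this is not TLA+, and the `transfer` example is invented): stating pre- and postconditions as executable assertions captures the same "write the assumptions first" idea, except that assertions are merely checked on each run, whereas a formal verifier proves them over all behaviours.

```python
def transfer(balances, src, dst, amount):
    """Move `amount` from balances[src] to balances[dst]."""
    # Preconditions: the "assumptions about the inputs".
    assert amount > 0, "amount must be positive"
    assert balances[src] >= amount, "insufficient funds"
    total_before = sum(balances.values())

    balances[src] -= amount
    balances[dst] += amount

    # Postcondition: the invariant a verifier would prove -- money is conserved.
    assert sum(balances.values()) == total_before
    return balances

accounts = transfer({"a": 100, "b": 0}, "a", "b", 30)
```

A violated assertion fails immediately instead of silently corrupting state, which is the run-time cousin of what the TLA+ tooling establishes up front.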

Phil Ashby

I've not heard the interview with Leslie, but I suspect his point was that at a coding level you often 'cannot see the wood for the trees', so your next goal (aka BDD test!) is unclear, and coding without a clear direction is frequently unrewarding for the people doing it and inefficient at producing output that is valuable to the stakeholders. Thus some design work must occur on a larger scale; what matters is the way that's done and how it supports the delivery process. IMO this is where the Agile Manifesto and Lean engineering (both aiming to keep cycle times low and efficiency up), and design techniques such as domain-driven design and evolutionary architecture (both aiming to reduce the amount of coupling and up-front design work required), can really help.

I like the concept of 'just enough documentation', preferably in the form of testable statements (did I mention BDD?), supported by minimal architectural information (eg: in architecture decision records) and technology information (eg: language(s), tooling, dependencies, style guides, delivery workflow) that enable a team to get on with delivering a few clear outcomes at a time, reviewing regularly with stakeholders.

Frank Puffer • Edited

My point is not about requirements specification or high level design. These need to be documented, at least to a certain extent before coding starts. Especially if more than one person is working on the project and you have to assign tasks.

I do have an issue with lower-level design outside code. To give an extreme example: I keep seeing people define function prototypes in Microsoft Word documents. I have even done that myself because I was supposed to write a detailed specification, so I tried to be as detailed as can be. This is something that, from my current point of view, makes no sense at all.

Phil Ashby

Ah ok, my apologies Frank, I dived in :)

I too would have a problem with things like function prototypes in static documents; that does seem excessive. I think design decisions, such as the particular patterns selected to address a challenge, are worth recording, along with the reasoning behind them. This could be in a block comment in the code, or maybe on a confluence page as a picture with some explanation. As you have noted, all of this non-code work is debt to be maintained, so it's worth reviewing its value at appropriate points.

Alain Van Hout

Planning is necessary, but in the same way that you plan a (holiday) trip: you make sure you have a clear general idea of where you want to arrive, what the most likely obstacles are (and the general approaches to circumvent them), and what the different high-level routes are that could get you to your destination (so you can make an initial decision on which to go for first). Beyond that, you have to find your way while en route and do your best not to invest time in dead ends.

As for those who only do architecture (in the narrow sense): problems are always smaller from a distance, so they don't see any of those numerous small problems that the average developer needs to work around.

Rob Waller

Planning is always useful as it aims to discover and define what you need to do before you begin.

For example:

  • Gather requirements
  • Define rules
  • Write examples
  • Acquire materials and resources

What you're referencing has more to do with process: how you produce code. And on this level I'd agree you don't need to document things like methods before you write them. Instead, write a test based on the requirements, rules and examples, write a method to fulfill the test, then document the method.
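A minimal Python sketch of that order (the discount rule and all names are invented for illustration): the test states the requirement first, and the method exists only to satisfy it.

```python
def test_discount_applies_over_threshold():
    # Requirement (from the gathered rules): orders of 100 or more
    # get a 10% discount; smaller orders are unchanged.
    assert price_with_discount(100.0) == 90.0
    assert price_with_discount(99.0) == 99.0

def price_with_discount(total):
    """Apply a 10% discount to orders of 100 or more, rounded to cents."""
    return round(total * 0.9, 2) if total >= 100.0 else total

# Running the test is the feedback loop: it fails until the method
# fulfills the requirement, then passes.
test_discount_applies_over_threshold()
```

The requirement lives in an executable test rather than in a static document, so it can never silently drift out of date.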