If you can't maintain it, don't build it

Graham Trott
Software Engineering relic with a keen interest in making programming more accessible to ordinary people.

The Titchmarsh Effect

Readers in the UK will be familiar with Alan Titchmarsh, a TV celebrity gardener whose shtick is to perform makeovers on unsuspecting victims' gardens while they are lured away from home by some suitable subterfuge. In the space of a couple of days, Alan's team transforms a wilderness into a show garden, constantly up against impossible deadlines and unexpected difficulties. When the victim returns, he/she is always delighted with the result (or at least pretends to be). It all makes great TV.

The thing is, though: if the owner had been a keen gardener, it would never have been a wilderness in the first place. And if they had wanted a show garden and could afford a gardener, they would surely have hired one. So the question is: who is going to look after this garden?

It is comparatively rare for the programme to revisit these gardens years later, but I fear that if they were to do so, in the majority of cases we would find them rapidly reverting to their original state.

I hope the parallel with software is obvious. Websites have long lives, so when we build one it is for the long term. Before we start, we should consider who is going to maintain it. The golden rule is

"If you can't maintain it, don't build it."

Maintenance and TCO

As with TV garden makeovers, interactive websites are often built by very highly skilled teams. A small proportion of site owners have an in-house team that holds onto its expertise over a long period, but far more often maintenance will be entrusted to people who are untrained in the details of the software and the framework used. And in many cases there is no maintenance team at all; just the owner of the site, who will have to call in a team each time maintenance is needed.

After a couple of years, the maintainer will almost certainly not be the original developer(s). Whoever takes over will need a surprising amount of time to get to grips with a complex website, and the temptation to cut corners is strong. Too often there is a steady degradation of the code and a rising cost of maintenance, until the site is deemed beyond repair and a complete rewrite is needed.

The problem is made worse when frameworks are used. These have short lives compared to that of a major website and are constantly under threat from newer, more fashionable alternatives. Even the same framework can undergo serious changes between versions (cf. Zend and Angular) that require a complete mental reorientation on the part of a programmer asked to pick up an old version.

These problems should be anticipated at the outset and factored into the TCO (Total Cost of Ownership), but little thought is usually given to the maintenance of what is being built.

DSL as the way forward

Various conclusions might be drawn from the above, but here is mine: third-generation languages (C, Java, PHP, JavaScript, etc.) should wherever possible be hidden from the customer, and should in particular be avoided when coding business rules, since that is where most changes will be made during the life of the product. Instead, you should always aim to have a layer in which both the computer and the domain expert can operate. The best-known example of this is SQL, which is easily read and written by database experts yet is also code that can be run by a computer. This Venn diagram shows the space in which a DSL (Domain-Specific Language) such as SQL operates:

Where machines meet humans

English-like DSLs are not hard to write. To start with, you need to devise an unambiguous syntax that remains close enough to English to be understood by any intelligent person with a good knowledge of the domain. Lack of ambiguity is the first prerequisite for a computer language, so this is where the two worlds meet.
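To make the idea concrete, here is a minimal sketch, in plain JavaScript, of an interpreter for a tiny English-like DSL with three verbs. The verbs and syntax are invented for illustration; this is not how EasyCoder itself is implemented.

```javascript
// A toy English-like DSL: each line starts with a verb, and variables
// are plain names. Supported here: "put <value> into <name>",
// "add <value> to <name>", and "take <value> from <name>".
function run(script) {
  const vars = {};
  // Resolve a token to a number: either a variable name or a literal.
  const value = (token) => token in vars ? vars[token] : Number(token);

  for (const line of script.split('\n')) {
    const words = line.trim().split(/\s+/);
    if (words[0] === '') continue; // skip blank lines
    const [verb, operand, preposition, target] = words;
    switch (verb) {
      case 'put':   // put 10 into Total
        vars[target] = value(operand);
        break;
      case 'add':   // add 5 to Total
        vars[target] += value(operand);
        break;
      case 'take':  // take 3 from Total
        vars[target] -= value(operand);
        break;
      default:
        throw new Error(`I don't understand '${verb}'`);
    }
  }
  return vars;
}

const result = run(`
  put 10 into Total
  add 5 to Total
  take 3 from Total
`);
console.log(result.Total); // 12
```

Because every line starts with a verb and the prepositions are fixed, there is no ambiguity for the interpreter, yet a domain expert can read the script at a glance.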

I speak from experience here, having built DSLs in Java and in JavaScript over the space of 20 years. They may not have been the most efficient programs ever written, but as time goes by this matters less and less. The sheer power of JavaScript on modern hardware is remarkable; even my relatively inefficient compiler can process between 3 and 10 lines of script per millisecond (this is just while compiling; runtime is far faster). And since a line of script often stands in for a process of substantial complexity, a script is usually much smaller than the JavaScript that would be needed to do the same job.

The effect of this strategy is to divide the programming effort in two: system programming, where any available technology will do because once built it will rarely be changed significantly, and user-level coding, done in a DSL whose vocabulary matches that of the domain. When the framework (if one was used) goes out of style it no longer matters, because in a carefully designed system all changes occur at the user level. The product therefore has a far more assured lifespan, and because maintenance no longer depends on high-level skills, mistakes are fewer and the cost of maintenance is lower.
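That division of labour can be sketched as follows (again an invented illustration, not EasyCoder's design): the system layer registers verb handlers once and is rarely touched, while the business rule lives in a one-line script that a domain expert can change safely.

```javascript
// System layer: a registry of verb handlers, written once by programmers.
const handlers = {};
function register(verb, fn) { handlers[verb] = fn; }

// System programming: built with whatever technology is to hand,
// then left alone. The verb name is invented for this example.
register('discount', (order, percent) => {
  order.total -= order.total * percent / 100;
});

// User-level coding: the business rule, expressed in domain vocabulary.
// Example script line: "discount order by 10 percent"
function runRule(line, order) {
  const words = line.trim().split(/\s+/);
  // words: ['discount', 'order', 'by', '10', 'percent']
  handlers[words[0]](order, Number(words[3]));
}

const order = { total: 200 };
runRule('discount order by 10 percent', order);
console.log(order.total); // 180
```

Changing the discount from 10 to 15 percent is an edit to the script, not to the system code, which is the point of the strategy: the part that changes often is the part anyone in the domain can read.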

Author's credentials

Note: The JavaScript DSL I refer to above is EasyCoder, implemented as a WordPress plugin and published in the WordPress plugins library. Examples of its use can be found at EasyCoder Software and its source code is on GitHub at EasyCoder Source.

Photo by Michal Mrozek on Unsplash
