A definition of a good composition often used on Arts foundation courses is that it is such that `if any element were to be missing or changed, the whole would be changed'. Perhaps it is a seascape, with a lighthouse making a strong vertical up one side, guiding the eye and placing itself in relation to the waves beneath. The situation of the lighthouse (and the waves) is one we recognize, and this is where the painting gets its power. If the noble lighthouse were a squat concrete pillbox, the picture would say something else. If the waves were an oil slick or a crowd of frisbee players, there would be still other messages in the painting.
The point is, there shouldn't be anything around that does not have a carefully arranged purpose with respect to the other elements of the composition. The artist needs to keep control of the message, and if the picture contains random bits, they will trigger unpredictable associations in the viewers' minds, and obscure the relationships between the important elements that the picture needs to work at all.
Logicians examining axiom sets face exactly the same issue. They have a much more precise term for what they mean, but this comes simply from the tighter formal structures within which they make their observations and propositions. They say that an axiom set should be `necessary and sufficient'. A necessary and sufficient set allows one to see clearly the `nature' of the `universe' being considered. It allows one to be confident that the consequences one finds are actually consequences of the area of interest and not of some arbitrary assumption.
In neither of these disciplines would it be necessary to remind people that keeping things as small as possible is an ongoing concern. Unfortunately, the practical usefulness of our art means that people are often keen to see new functionality, which we try to construct as quickly as possible. Once established, functionality becomes part of the background, and all of us, from corporations to individuals, start to become ensnared in our own legacy systems.
Although this may seem like an eternal, unavoidable feature of the Programmer's Condition, one does see people breaking out of this cyclic degeneration, and from this perspective of programming as a creative art, we can describe how they do it.
The fundamental difficulty in keeping control of legacy structures, be they artefacts of the customer's transport strategy that have made it into the specification for the fixed costs amortization logic, or an ancient CODASYL indexing system that one is being asked to recreate in an object database, is time. This is sometimes expressed as `cost', but the issue is rarely cost. It is deadlines. Apart from circumstances where the misguided cry `Wolf!', there is no getting away from deadlines. They are a commercial reality over which we have no control. That's OK - we just think about them realistically and manage their problems rather than use them to justify poor products.
The first point of leverage against deadlines is recognizing that work proceeds faster in a clean environment, without odd flags on functions, inconsistent calling conventions, multiple naming conventions and the like, than it does with the junk in place. Days after clean-up count for more than days before clean-up. So do the clean-up first, when everyone can see a long project ahead of them, and get the time back later. You will nearly always have to do a clean-up - the code that most organizations put in their repository is usually the first that passes all test cases. This does not matter. Do your own clean-up for this phase, regression test, and don't even discuss your own deltas until you can see straight.
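The kind of junk described above - an odd mode flag on a function, a stray naming convention - is the sort of thing a clean-up combs out. A purely illustrative sketch (all names and the tax-rate figure are hypothetical, not drawn from any real codebase):

```python
# Hypothetical 'before': one function with an odd mode flag and a
# naming convention inconsistent with the rest of the file.
def calc(value, mode=0, Verbose=False):
    if mode == 0:
        return value * 1.175   # price including tax
    else:
        return value / 1.175   # price excluding tax

# 'After' the clean-up: two plainly named functions, one naming
# convention, no flag argument for callers to get wrong.
TAX_RATE = 1.175

def price_with_tax(net):
    return net * TAX_RATE

def price_without_tax(gross):
    return gross / TAX_RATE
```

Behaviour is unchanged - which is exactly what the regression test is there to confirm before any new deltas are discussed.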
The warning that comes with this observation is to be realistic about how long your clean-up will take. The nastier the tangle, the bigger the multiplier a clean-up will give, but the greater the risk that you won't have time to sort it out and still do the work. A useful question often is, `How complex is the black-box functionality of this thing?' If the answer is `Not very!', then you know that as you incrementally comb the complexity out, it will collapse to something simple, even if you can't yet see the route at all.
The second point of leverage comes from the exponential collapse of complexity in software. If you have a cleaner algorithm, the minimal implementation will be simpler. The less code you have, the easier it is to see the structure in the code, and the smaller the chance of off-concept bugs. At the same time, less code means fewer opportunities for syntax errors, mistyping of variables and so on. Fewer bugs mean fewer deltas, and fewer deltas mean fewer tests. It doesn't take long in any team of more than half a dozen people for most of their activity to descend into a mayhem of mutual over-patching, with repository access as the bandwidth bottleneck. Letting loose stuff through the process into later stages can plant a time-bomb that will go off when it is too late to do anything about it. On the other hand, a frenzy of throwing away in the midst of such a situation can return one to calm in a matter of days.
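The collapse is easy to see in miniature. In this hypothetical sketch (the command names are invented for illustration), a chain of near-identical branches is replaced by its shared structure - and every deleted line is a mistyped variable or copy-paste slip that can no longer happen:

```python
# Hypothetical 'before': near-identical branches, each a fresh
# opportunity for a copy-paste error or a mistyped variable.
def dispatch_verbose(cmd, arg):
    if cmd == "start":
        return "starting " + arg
    elif cmd == "stop":
        return "stopping " + arg
    elif cmd == "pause":
        return "pausing " + arg
    return "unknown"

# 'After': the shared structure made explicit. Adding a command is
# now one table entry, not another hand-copied branch.
_ACTIONS = {"start": "starting", "stop": "stopping", "pause": "pausing"}

def dispatch(cmd, arg):
    verb = _ACTIONS.get(cmd)
    return verb + " " + arg if verb else "unknown"
```

Less code to read, less code to patch, fewer deltas fighting over the repository.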
The third point of leverage is the `skunkworks', so called because the original Skunkworks was located by Lockheed Martin, at a remove from its corporate centre, `because it stunk'. This fearsome technique can be used by excessively keen teams in secret on winter evenings, or can be mandated by enlightened management. As with everything on this course, we will offer an insight into why skunkworks work.
In industrial age activities like housebuilding, we have physical objects (bricks) which are awkward to manage. Instead of piling up bricks against reference piles to see how many we will need to build a house, we count them. The abstraction from physical to informational gives us enormous leverage in managing bricks. Eventually, we have so many numbers telling us about supply, transport, and demand that we have to organize our numbers into patterns to manage them. We use spreadsheets, and the abstraction from informational to conceptual again gives us enormous leverage.
In information activities such as programming, we don't start with the physical and get immediate leverage by moving to the informational. We start with informational requirements, listings and so on, and we have to manage these with informational tools. We have to do this for good reasons, such as information contracts with suppliers, and information-sharing agreements with colleagues contained in our process. We also sometimes do this for bad reasons, such as a too literal translation of informational techniques for managing house bricks into the informational arena - counting productivity by KLOCs, for example.
The trouble is, in normal working, we have no leverage. The information content of a meeting's minutes can be bigger than the requirement they discuss! As an activity performed by humans, the meeting has negative leverage! We only win either because we can sell our new bit many times, or because in collaboration with other bits it gives vast added value to the process.
This leaves the opportunity to use understanding to gain leverage over information. The skunkworks is sometimes seen as an abandonment of the process in the interests of creativity. Nothing could be further from the truth. One needs a high proportion of experienced people to pull the trick off because they must lay down highly informed dynamic personal processes to get anything done at all. What one trades off is the understanding contained in an exhaustive process, for the understanding contained in people experienced in their craft. From this comes the precondition for the skunkworks. By abandoning the detailed process, one accepts that risk is inevitable, and loses the personal protection given by simple, well-defined objectives. Everybody must accept that a skunkworks may fail, that what it delivers might not be what was expected, and that there may be issues reinserting the results into traditional management streams. But when they work, they work magnificently!
All successful startups are skunkworks. So are unsuccessful startups. A skunkworks effort can turn a major maintainability bloat risk into a small upfront time risk. In these situations, it can be an effective risk management tool.
Copyright (c) Alan G Carter and Colston Sanger 1997