
Software Development: Some Things Never Change

Alex Fawkes ・5 min read

As a field, software development moves fast. New technologies come and go, methodologies become popular and then decline, style and aesthetic preferences change, and user expectations rise as fast as technologists can deliver.

This backdrop can be misleading. Sometimes, things don't change. It's important to take the occasional opportunity to reflect on how we got where we are today. Here are a few general software development practices that, at a fundamental level, have remained constant through the years:

Code Reuse

Even back in the 1960s and early 1970s, developers at pioneering organizations like Xerox were pushing for the increased abstraction and modularization that have allowed us to develop the large-scale software systems we have today. This push eventually gave rise to higher-level compiled languages like C, then bytecode languages like Java, and on to dynamically interpreted languages like Python. These languages allow developers to produce more functionality with less fuss over implementation details like memory management and data structures.

The major programming paradigms come into play here as well. Structured programming, object-oriented programming, and functional programming all have roots going back to the 1960s. Structured programming promotes reuse of logic through control flow ('if' statements and 'while' loops). Object-orientation modularizes and encapsulates data and algorithms beneath easy-to-understand, interchangeable interfaces. Functional programming promotes reuse through deterministic behavior and minimization of side effects - it's safe to reuse code if you know you can just throw out any part of the result you don't need.
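That last point about functional programming is easy to make concrete: a pure function's output depends only on its inputs, so callers can reuse it anywhere and discard whatever part of the result they don't need. A minimal Python sketch (the function name and example are my own, not from the article):

```python
# A pure function: no side effects, same output for the same input.
def word_frequencies(text):
    """Count how often each word appears in a string."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Safe to reuse anywhere: a caller can take just the entry it needs
# and throw the rest away, with no hidden state to worry about.
freqs = word_frequencies("the quick fox and the lazy dog")
assert freqs["the"] == 2
```

Because nothing outside the function changes, calling it twice, in parallel, or speculatively is always safe - exactly the reuse property described above.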

The current trend seems toward polyglot environments - breaking apart software systems into modules written in separate languages and paradigms depending on suitability to domain, but interoperable through standard protocols. The rise of open source is also a good example of code reuse, where basic foundational technologies can be constructed and shared by the larger community, allowing projects to focus more fully on their differentiating value.

Design & Architecture

Fred Brooks, the author of The Mythical Man-Month, wrote that one of the hardest lessons of the OS/360 project was the importance of a consistent architectural vision - what he called conceptual integrity. Initially lacking one, the team found the codebase too difficult to understand and reason about.

Not every project is on the scale of OS/360, but the basic lesson still applies. In general, it's better to have a clear and consistent vision of a piece of software before jumping into coding. It doesn't have to be a formal process resulting in a mountain of documentation, but every change should start with an understanding of its general scope and purpose within the larger system.

Personally, I try to keep SOLID principles in mind before starting any implementation. It doesn't need to be dogmatic, because engineering is all about context and tradeoffs, but it's important to have a common language for understanding, analyzing, and discussing design decisions. This also goes a long way toward maintaining quality over the long term, especially for large codebases and distributed teams.
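To make one of those principles concrete, here's a small sketch of dependency inversion (the "D" in SOLID) in Python. The class names are invented for illustration, not taken from any real codebase:

```python
from abc import ABC, abstractmethod

# High-level code depends on this abstraction, not on any concrete store.
class MessageStore(ABC):
    @abstractmethod
    def save(self, message: str) -> None: ...

class InMemoryStore(MessageStore):
    def __init__(self):
        self.messages = []

    def save(self, message: str) -> None:
        self.messages.append(message)

class Logger:
    # Logger accepts any MessageStore, so swapping the storage backend
    # (file, database, test double) requires no change here.
    def __init__(self, store: MessageStore):
        self.store = store

    def log(self, message: str) -> None:
        self.store.save(message)

store = InMemoryStore()
Logger(store).log("deployed v1.2")
assert store.messages == ["deployed v1.2"]
```

The payoff is exactly the shared-vocabulary point above: "this class violates dependency inversion" is a precise, discussable design claim rather than a vague complaint.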

Source Control

Bell Labs was using SCCS to manage its software revision history back in the early 1970s. Proper version control is essential for managing the integration of complex changes - again, especially for large codebases and distributed teams. It also plays an important role in quality control: when an issue is first reported, developers can narrow its introduction down to a range of revisions, then inspect those changes to help debugging.
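That debugging workflow is essentially a binary search over history, which is what `git bisect` automates. A toy Python sketch of the idea, assuming revisions are ordered chronologically and the bug, once introduced, stays present (the revision list and predicate are made up):

```python
def first_bad_revision(revisions, is_bad):
    """Binary-search a chronologically ordered list of revisions
    for the first one where is_bad(rev) is True."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid          # bug already present: look earlier
        else:
            lo = mid + 1      # still good: look later
    return revisions[lo]

# Pretend revisions 0-9 exist and the bug arrived in revision 6.
revs = list(range(10))
assert first_bad_revision(revs, lambda r: r >= 6) == 6
```

Instead of testing every revision, this checks only about log2(n) of them - which is why bisecting even a long history converges quickly.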

Today, Git has become the default version control system. Its distributed model makes it easy to fork existing projects for custom changes, and provides a powerful and convenient mechanism for contributors to request their changes be merged back to the master branch for official distribution. Whatever source control system works for your team, be sure you have a sane way to track how, when, and why your system is changing over time.

Testing & Quality Assurance

Traditionally, QA processes were borrowed from existing engineering disciplines, which had originally developed them with an eye toward physical systems. Large organizations would produce detailed documents outlining specific manual testing procedures, which were carried out by teams of trained, dedicated testers.

Today, there's a definite trend toward minimizing manual QA activities, especially with the rise of developer-driven automated testing. Still, there's a place for manual QA. First, it's important that software be tested by someone other than its author. Second, it's good to have a non-developer sit down with the software - what's intuitive to a developer familiar with the project isn't necessarily intuitive to the end user.

Automated testing is generally good practice, but only to the extent that developing and maintaining the tests actually contribute to end user value, and couldn't more effectively be performed manually (if less frequently). Either way, testing is and will continue to be an essential practice in the field.
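For readers who haven't seen it, a minimal example of that developer-driven automated testing using Python's built-in unittest module - the function under test is invented purely for illustration:

```python
import unittest

def slugify(title):
    """Turn an article title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Some Things Never Change"),
                         "some-things-never-change")

    def test_single_word(self):
        self.assertEqual(slugify("Git"), "git")

# exit=False so the test run doesn't terminate the interpreter.
unittest.main(argv=["slugify-test"], exit=False)
```

Tests like these run on every change, which is exactly the tradeoff the paragraph above describes: cheap to repeat once written, but only worth writing where they protect real end-user value.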

Talking to Users

For a variety of reasons, software developers have a tendency to forget this one. That's unfortunate, given its special importance. Ultimately, software ends up in the hands of end users, who are generally using it for things that aren't software development. If you're not talking to your users, you don't know what you're trying to build.

Early in software development, users were specialists who worked in-house, making communication relatively easy. With the advent of microcomputers and mass market software in the 1980s, large companies developed teams of customer relations experts to keep programmers informed of end user needs.

Today, with the dominance of Agile processes and their derivatives, it's common for developers to talk directly to end users. Ideally apps have multiple built-in feedback mechanisms, and social platforms are made accessible for communities to form around a given technology. However you do it, you should definitely stay in touch with your users.

And so...

Software is a special animal. Its extreme flexibility, ease of distribution, and near-zero marginal cost make it possible to build and deploy systems so massively complex that no single human can possibly understand them in full. This has been an identified problem since the very early years of the practice - the limitations of software development come down to intelligent management of complexity, and to abstracting out components at a scale that individual contributors can understand and reason about.

All of these practices - code reuse, design and architecture, source control, testing and quality assurance, and regularly talking to end users - are essential to the production of good, working software. They've been in place since the infancy of software development, and if I'm allowed to speculate, will continue to be practiced for as long as people are writing software.

This post was originally published to CheckGit I/O.

Discussion

Sung M. Kim

Thank you, Alex.

This post is relevant to a question I asked, so I linked your post there (in case someone comes across that post). 🙂

Alex Fawkes Author

Great! Thanks for reading, Sung.

Joseph Mancuso

Did you read the whole mythical man month book?

Alex Fawkes Author

Yes, actually, but it's been about seven years since. Why do you ask?

Joseph Mancuso

In that book, one of the concepts is that software development is not a perfectly partitionable task - a 10-month development job for 1 developer can't be broken into a 2-month job for 5 developers.

This also ties into Brooks's law, which says that adding developers to a late project makes it even later.

What are your thoughts on this? Do you think software development is partitionable? Or was that concept just a product of its time (the book was written in the '70s), and have we come a long way in improving that aspect of software development?

Alex Fawkes Author

It's difficult to partition software development work and always will be.

I don't think this is specific to the field; two mathematicians aren't going to produce a proof in half the time. Two novelists aren't going to write a book that's twice as good. Two artists, etc.

He goes into the underlying mechanics - it's due to communication costs, which grow quadratically as you add participants. That will always be a factor.
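To put a number on those communication costs: Brooks points out that n contributors have n(n-1)/2 possible pairwise channels, so coordination overhead grows quadratically with team size. A quick check:

```python
def channels(n):
    """Pairwise communication channels among n contributors: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the team far more than doubles the coordination burden.
assert channels(5) == 10
assert channels(10) == 45
```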

We've made some progress there - better abstraction and modularization help limit communication to defined interfaces, for example. Top dev orgs have learned to prefer tight teams of highly-skilled developers (versus large teams of low-skill developers).

And we'll continue to make progress - the trick is to find ways to reduce necessary communication, which is going to be some combo of reducing the number of people involved, reducing the info necessary per person, reducing the number of people any individual needs to communicate with, and maximizing the effectiveness of existing contributors (rather than adding more).

Imagine a network graph - now remove as many edges as possible, and minimize the weight of remaining edges (nodes are contributors, edges are communication). And don't accidentally cut essential communication. ;)