wmchurchill3

The Perils of Feature-Driven IT

Many IT organizations emphasize new features in their product portfolio, often at the expense of stability, security, and, in many cases, sanity. The old adage says there are three qualities of a software project: cheap, fast, and good; pick two. Feature-driven shops, in an effort to squeeze out as much functionality as they can, pick cheap and fast. That looks good in the short term, but it is not sustainable. This post outlines strategies, and the reasons, for adopting a less aggressive feature schedule.

In the haste to get features out the door, corners get cut. The first casualty is usually automated testing: it requires writing code that does not deliver new features, so it is deemed a waste of time. In reality, automated tests are a cost savings. Not only do they help verify the desired functionality, they build up a battery of tests to fall back on, preventing regressions from being introduced when new features are added. They also make safe refactoring possible, which is always necessary as a project ages (more on that below).

These automated tests, coupled with an ever-growing catalog of good regression tests, help focus manual QA and UAT. Those groups no longer have to run laborious, error-prone manual regression scripts to validate the application, which can reduce QA time per release cycle, cutting costs and freeing up people for other projects and features. And because the application is self-testing, changes are less likely to come back from QA or produce a production defect. Both effects significantly reduce development costs and improve adherence to the project timeline. The overall result of simply introducing automated testing is higher-quality code at a lower cost.
(Figure: Automated Test Returns, from the referenced report.)
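
To make this concrete, here is a minimal sketch of the kind of regression test described above, using Python's built-in unittest module. The apply_discount function is a hypothetical stand-in for real business logic, not anything from a specific codebase.

```python
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: discount a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    """Each test locks in current behavior, guarding against regressions."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.00, 25), 75.00)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(42.50, 0), 42.50)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)


if __name__ == "__main__":
    unittest.main()
```

Once a suite like this exists, QA can spend its time on genuinely new behavior instead of re-running the old regression script by hand.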

Another corner-cutting tactic is hardcoding behaviors or values into the application. There are always intentions of doing it right later, but later seldom arrives. The result is an extremely brittle application that often cannot survive environment promotion (i.e., QA to production) and is difficult to extend. Hours get lost troubleshooting something that should be as simple as a configuration change, hours that could have gone toward developing more features. Again, that is cost savings lost.
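
As a small illustration, here is a sketch of moving one hardcoded value into environment-driven configuration. The SERVICE_URL name and its default are assumptions for the example, not a prescribed convention.

```python
import os

# Brittle: a hardcoded QA endpoint that breaks the moment the
# application is promoted to production.
# SERVICE_URL = "https://qa.example.com/api"

# Configurable: each environment supplies its own value, so promotion
# becomes a configuration change rather than a code change and redeploy.
SERVICE_URL = os.environ.get("SERVICE_URL", "http://localhost:8080/api")

print(f"Calling service at {SERVICE_URL}")
```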

Adding features to a brittle application becomes increasingly problematic as time passes. One of the quickest ways to get something working is to copy and paste an existing solution (because it works) and shoehorn it into the shape of the new feature. While this allows similar changes to be made, it makes it nearly impossible to implement a large new feature that deviates from the normal business processes. Innovation is stifled, and revenue is lost. In today's world, an actively used application is constantly changing to fit its user base; if the application cannot change, it becomes harder to use and its usage declines. This is where refactoring helps. A safe refactor keeps code clean and efficient by removing unused (and potentially dangerous) chunks of code, and it allows larger, faster changes later because there is less legacy code to navigate. We see similar things in nature: a controlled fire can clear out a section of forest so it grows back healthier. In many cases, when a code base cannot be made to accommodate a new feature request, it has to be replaced, either with an off-the-shelf offering requiring customization or with a purely custom solution replacing the entire application. Either path is expensive and time-consuming.
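
Here is a hedged sketch of the kind of safe refactor meant above: two copy-pasted report builders collapse into one shared helper. The report functions are hypothetical, chosen only to show the pattern.

```python
# Before: each new report was copy-pasted from the last one, so a fix
# to the totals logic had to be repeated in every copy.
def sales_report(rows):
    total = sum(r["amount"] for r in rows)
    return {"title": "Sales", "count": len(rows), "total": total}

def refunds_report(rows):
    total = sum(r["amount"] for r in rows)
    return {"title": "Refunds", "count": len(rows), "total": total}

# After: the shared behavior lives in one place, and a genuinely new
# report is a one-line call instead of another pasted block.
def build_report(title, rows):
    total = sum(r["amount"] for r in rows)
    return {"title": title, "count": len(rows), "total": total}

def sales_report_refactored(rows):
    return build_report("Sales", rows)

def refunds_report_refactored(rows):
    return build_report("Refunds", rows)
```

With the automated tests from earlier in place, a refactor like this can be made with confidence that behavior has not changed.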

Infrastructure to support the creation of software is often skipped in the name of new features. If we use software to automate business processes for performance and efficiency, shouldn't we automate the creation and deployment of software as much as we can? Promoting an application from a QA environment to production, as described above, should be as simple as pushing a button. Many of us remember staying up late to deploy sites, following lengthy scripts to ensure the site would come back up with the new changes. In spite of our best efforts, mistakes were made and things were overlooked. Automation significantly reduces the opportunities for these oversights, making for a much smoother deployment. Many organizations with a significant online presence deploy changes to their sites multiple times a day; some even test in production. How can they do this? By having a robust infrastructure to build, test, and deploy their software, with built-in monitoring and redundancy to improve resiliency. None of this is free, but it buys more uptime and faster feature delivery.
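
To sketch the "push of a button" idea, here is a minimal, hypothetical deployment step in Python: run a deploy command, then verify the site actually came back up. The deploy.sh command and health-check URL are placeholders, not a real pipeline; a production setup would live in a CI/CD system rather than a hand-run script.

```python
import subprocess
import sys
import time
import urllib.request


def deploy_and_verify(deploy_cmd, health_url, retries=5, delay=5):
    """Run a deployment command, then poll a health endpoint.

    Replaces the late-night manual checklist: one entry point, the same
    checks every time, and a clear failure signal if the site does not
    come back up.
    """
    result = subprocess.run(deploy_cmd, shell=True)
    if result.returncode != 0:
        sys.exit("Deploy command failed; aborting.")

    for _ in range(retries):
        try:
            with urllib.request.urlopen(health_url, timeout=10) as resp:
                if resp.status == 200:
                    print("Deployment verified: site is healthy.")
                    return
        except OSError:
            pass  # Not up yet; wait and retry.
        time.sleep(delay)

    sys.exit("Site never became healthy; roll back.")


if __name__ == "__main__":
    # Both values below are stand-ins for a real pipeline's configuration.
    deploy_and_verify("./deploy.sh production", "https://example.com/health")
```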

New features can definitely enhance a product, but not at the expense of stability, maintainability, and extensibility. A small up-front investment in things like automated testing, CI/CD, and good coding practices can yield a big payoff down the line: fewer unplanned outages, more time for planned work, and faster development cycles.
