Vinicius Stock

Before shipping a new feature

Introduction

Shipping new features is exciting, and as software developers, we want to see our code running in production. However, there are a few actions, questions, and steps we can take before delivering a task that will help improve the quality of the new functionality.

Actions such as building the code to fail gracefully, taking advantage of refactoring opportunities, understanding the impact of the features being delivered, and monitoring and manually testing the changes are only a few examples of pre-release activities.

Questions to ask before delivering a feature

It is essential to always keep in mind the impact new changes can have on end customers. Regardless of how simple the commits being merged are, reflect on what the consequences would be for users if something went wrong.

Also, reflect on how important the feature being delivered is to end users, from their perspective. How can it be made better?

A few questions to ask before shipping our beloved code could be:

  • What will the user experience if this page/service breaks?
  • How difficult would it be to revert these changes?
  • How big of an impact would a break like this have on our brand's image?
  • What other features can leverage this one and deliver more value to our users?
  • What functionality could be missing in this feature?

The answers to questions like these will outline steps that can be taken to improve the quality of the shipped feature.

Before shipping checklist

Here is a checklist of actions that can be taken before clicking the deploy button.

Create tests for impactful corner cases

Testing every single edge case is certainly not needed. You will most likely not cover all of them anyway, and trying to will bloat your test suite.

However, corner cases that lead to impactful failures must be tested and covered with fallback functionality. For example, suppose a specific page relies on fetching data from an external API: what happens if the API is down? Does the page break? Is it partially populated? Does it redirect to a different page?
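As a rough sketch of what covering that corner case could look like (in Python, with a made-up load_recommendations helper, an assumed endpoint, and pytest's monkeypatch fixture), the idea is to simulate the outage itself and assert that the code falls back gracefully instead of breaking the page:

# Hypothetical example: a page shows recommendations fetched from an external
# API and falls back to an empty list so the page still renders if the API is down.
import requests

RECOMMENDATIONS_URL = "https://api.example.com/recommendations"  # assumed endpoint

def load_recommendations(user_id):
    """Return recommendations for a user, or an empty list if the API fails."""
    try:
        response = requests.get(RECOMMENDATIONS_URL, params={"user": user_id}, timeout=2)
        response.raise_for_status()
        return response.json()
    except requests.RequestException:
        # Graceful fallback: render the page without recommendations
        return []

# The impactful corner case: the external API is unreachable.
def test_page_survives_api_outage(monkeypatch):
    def raise_timeout(*args, **kwargs):
        raise requests.Timeout("API is down")

    monkeypatch.setattr(requests, "get", raise_timeout)
    assert load_recommendations(user_id=42) == []

The exact shape depends on your stack; the point is that the outage itself has a test, not just the happy path.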

Exploring the most impactful edge scenarios will often prevent production headaches in the future. And if one slips through, reproduce the error in a test, fix the undesired behavior, and then ship the fix.

Manually test changes

Unfortunately, automated tests don't always catch everything. Take the time to explore the product and the new feature being delivered. Experience it as the customers would and ask yourself if you're satisfied with the solution.

If improvements related to the feature are identified, make sure they are tracked with a user story. They might not need to be implemented right away, but if they are accessible in the backlog, eventually they will be worked on.

Get feedback from the customers

We often have ideas and opinions on how the product we develop should be used. However, what truly matters is how our customers use and perceive the features we deliver.

Getting feedback from end-users is a bittersweet feeling. It can be tough and insightful at the same time. They might criticize the product you worked hard to build while also giving you the knowledge of what they need or expect from it. Be open-minded and ready to shift focus if necessary.

Set up adequate monitoring

In software, a lot happens under the hood. It can be far from easy to tell whether everything is fine just by playing around with the user interface. Logging is one of the ways to peek into the system running in production and get a sense of whether things truly are fine.

But what should you log? That is not a trivial question. It depends on the system, the product, and the team building it. Too much logging might cause unnecessary overhead, while too little will give you zero context about what is happening in the application.

Some examples of things to log are:

  • When expected errors happen (e.g.: inside rescue/catch block)
  • When an undesired, but necessary behavior occurs (e.g.: users are redirected to a 404 page)
  • Key metrics (e.g.: number of items processed in the background)
  • Business indicators (e.g.: number of users that signed up using Google)
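To make this concrete, here is a small, purely illustrative Python sketch (the logger name, PaymentError, and the charge helper are all made up) covering two of the cases above: an expected error logged inside the except (rescue) block, and a key metric for a background job:

import logging

logger = logging.getLogger("orders")  # hypothetical application logger

class PaymentError(Exception):
    """Hypothetical error raised when the payment provider rejects a charge."""

def charge(order):
    """Placeholder for a call to an external payment provider."""
    raise PaymentError("card declined")

def process_pending_orders(orders):
    processed = 0
    for order in orders:
        try:
            charge(order)
            processed += 1
        except PaymentError as error:
            # Expected error: log it inside the except (rescue) block
            logger.warning("Payment failed for order %s: %s", order["id"], error)
    # Key metric: how many items the background job actually processed
    logger.info("Processed %d of %d pending orders", processed, len(orders))
    return processed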

Refactor code

It is common to stumble upon pieces of code that could be refactored, either for quality or for reusability. Improving the code or making it reusable for other developers might cost a little time now, but it will pay off in the long run.
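As a trivial, made-up illustration of the reusability case: the same date-formatting rule is duplicated in two places, and extracting it gives the next developer a single helper to reuse and a single place to change:

from datetime import date

# Before: the same formatting rule is repeated wherever a date is displayed.
def order_summary_before(order):
    return f"Order {order['id']} placed on {order['placed_at'].strftime('%b %d, %Y')}"

def invoice_header_before(order):
    return f"Invoice for order placed on {order['placed_at'].strftime('%b %d, %Y')}"

# After: the shared rule lives in one reusable helper.
def display_date(value):
    """Single place that decides how dates are shown to users."""
    return value.strftime("%b %d, %Y")

def order_summary(order):
    return f"Order {order['id']} placed on {display_date(order['placed_at'])}"

def invoice_header(order):
    return f"Invoice for order placed on {display_date(order['placed_at'])}"

order = {"id": 7, "placed_at": date(2020, 5, 1)}
print(order_summary(order))  # "Order 7 placed on May 01, 2020"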

If no developer ever pays the price of refactoring, technical debt piles up until the codebase slows down the development of new features. At that point, you're paying the price of all the refactoring that wasn't done.

If a section of the code can be improved with some extra work, do it. Make the next developer who touches that part of the code happier.

Conclusion

This is my take on a feature release checklist. What would you add to the list? What practices do you have in your teams? I'd love to hear it!
