Marco Salis

Frameworks vs best practices: what really matters the most?

Promoting the importance of best practices over frameworks is one of the few fights I believe are always worth picking when it comes to software development (and something I’m very passionate about). Yet, it’s a discussion you don’t often hear about, and if you do, it’s usually ‘marketing’ material, not followed by concrete actions.

This is my humble “manifesto”, something that I don’t want to compromise on in my career. I believe in best practices, but I’m not the only one!

At the risk of sounding like John Lennon, I will try to convince you that best practices and discipline are more important than frameworks and libraries. I’m looking forward to your comments.

TL;DR: New frameworks and libraries are the bread and butter of a software developer. In such a rapidly evolving field, where technology is in a state of constant change, the ability to quickly learn how to use a new tool effectively is one of the most desirable skills for a professional.

However, as software professionals, we are also required to put the importance of those frameworks into context. Frameworks and libraries die and get replaced all the time. Software development and testing best practices, architecture and team processes persist through time and are much more important for the long term success and maintainability of a project.

Not only that: they are also far more crucial for our careers.

For somebody to be considered a software professional, the ability to always keep those best practices in mind, constantly apply them to code (even when deadlines approach) and mentor others towards them is key. It is a fundamental mindset that accompanies us in our everyday work. Or at least, it should be!

A small disclaimer

Nothing in this post is revolutionary or even innovative, nor is it the result of any specific talent of mine. It is just a collection of ideas and opinions that I formed through the years by either learning from colleagues or (more or less costly) mistakes, or that I read about in some amazing, inspirational books (some of which I list at the end).

Moreover, I don’t want to teach anybody anything or sound patronizing: I am pretty sure that most developers already know everything I talk about here. I just want to highlight how important all of it is, and urge everybody to fight for best practices, use them and change the mindset of those around us. Make business and project owners aware of the fact that this is not a dogmatic fight; it is professional behavior that, ultimately, is going to ensure the success of whatever “makes the money”.

What developers like

Pretty much every (good) software developer I know is passionate about new technologies, and experiments with them in their own free time. It is probably why we chose to become developers in the first place: we like to solve complex problems, and find the best technology to solve them with.

At the same time, there are several different aspects of the job that motivate us. There are those who only thrive if they use new stuff all the time, and are comfortable using many different technologies at once; those who prefer focusing on a single technology, knowing every single aspect of it and growing with it; and there’s everything in between. None of these is better than the others, in my opinion.

There’s a different angle to it as well. What makes a life of computer programming enjoyable, or at least bearable? Some of us like technology for its own sake, using it in a creative way and seeing what happens. Others care more about the final result, and love the technology as a means to create something.

Again, both are equally valid, but the average business might disagree: most software companies make money out of a finished product or service. For it to be profitable and valuable, it must work and, more importantly, last through time.

I might be generalizing here, but I believe there is a trend in most people's careers where, at some point, the balance starts tipping towards the stability of working products. This, at least, is what happened to me. I still like new technologies, but where I find the most pleasure, career-wise, is using them to build something durable that I can be proud of. In my opinion, this paradigm shift is due to a series of reasons:

  • Career roles. In a senior or management role, your responsibility and accountability towards a product grows. You are expected to work with certain standards, and the final result must reflect your level of experience. Regardless of what you do, there is usually less room for experimenting for the sake of it.
  • Age. Yes, I might be shooting myself in the foot here, but at 40 you don’t learn as quickly as you did at 20. Admittedly, a tiny bit of laziness can seep in as well, and it is natural to find comfort in more stable tech that you know well.
  • Déjà-vu effect. As you change job after job, you slowly get bored of seeing the same mistakes made over and over again: that monolithic application (or 5000-line class) you have to untangle, that “quick throwaway project” you have to fix because it is now a production release that doesn’t work. At the same time, with experience you learn new tricks for avoiding all of the above situations, and you want to put them into practice before it’s too late.

Mind you, I’m absolutely not saying you can ever stop learning new things, at any point in your career. The moment you do, you will be obsolete in no time. The most important thing here is to see that all of the motivations above push towards best practices. The technology you use, in the long term, won’t matter, as it will be replaced. You will learn a new framework, and another, and then another. What matters is to make the switch easier, and ensure a product is easy to change (in the short, medium and long term), understandable, stable and maintainable.

An ever-changing field

The ability to learn quickly is a desirable, even required career skill for a developer, if you want to survive and adapt to a constantly changing tech ecosystem; even when you want to specialize in a specific platform. It is what the market asks for: full stack developers with expertise in 5+ different languages; mobile developers who know how to code on Android and iOS, but also know Flutter and machine learning, have 10+ years of experience in server development and, why not, maybe even a degree in astrophysics (!!).

Let’s just take the Android platform as an example, since it is what I know best, and where I decided to specialize for the time being. Ten or more years ago, you would use Eclipse as the IDE, Java as your language, Threads or little more for concurrency, static XML UI layouts and, to help you with common use cases throughout projects, more or less well known third party libraries.

Nowadays, you have a different IDE (Android Studio, based on IntelliJ), a different language (Kotlin), structured concurrency (coroutines) and declarative UI tools (Jetpack Compose). The third party libraries you use are, for the vast majority, different ones.
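To make the contrast concrete, here is a minimal sketch (the function names and the fake workload are mine, not from any real project) of the same background task written the old way and the new way:

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Then: a bare Thread with a callback. No scoping, no cancellation,
// and errors must be wired through the callback by hand.
fun loadGreetingLegacy(onResult: (String) -> Unit) {
    Thread {
        Thread.sleep(100) // stand-in for a blocking network call
        onResult("Hello from 2013")
    }.start()
}

// Now: a suspending function with structured concurrency. It runs on a
// background dispatcher and is cancelled automatically with its caller's scope.
suspend fun loadGreeting(): String = withContext(Dispatchers.IO) {
    Thread.sleep(100) // the same stand-in workload
    "Hello from today"
}
```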

Everything has changed, except for the base concepts of the OS, and the very basics of its SDK. The capabilities of the platform have also expanded enormously, with hundreds of native components you can plug in for a specific functionality. Nobody on Earth knows and uses all of them (if they say they do, they’re lying). An Android developer who decided to change technology 10 years ago, having to get back to the platform these days, would throw a fit and probably decide to do something else entirely (heck, it’s hard to go back to it after 5 months).

If you had an Android application that you started 10 years ago, there is a good chance that, through time, you would need to change a lot of its code to keep it up to date with the platform’s tech. There is something, though, that wouldn’t change if you rewrote that app from scratch. The object-oriented programming structure and rules, the patterns, the code quality best practices would be exactly the same, and what’s more, they’ve been around for decades. They transcend platform, technology, programming language and even architecture.

The RxJava example

Ever had that one colleague who joined the company, added obscure dependencies to the project (to achieve the most trivial things), “infested” the whole codebase with them, and then quit a few months later? Have you ever been the one who joins next and is tasked with fixing their mess? Not pleasant, right?

There are hundreds, thousands of frameworks and libraries for the Android platform alone. Some are incredibly popular, written and maintained by whole teams or companies; some are used by only a handful of people, and written in somebody’s spare time. Their lifespan can vary significantly based on this, but also on the technology they rely upon. Some die a slow and quiet death, others in the spotlight, accompanied by the angry tweets of desperate developers who relied on them more or less heavily.

RxJava is neither of those. It is an incredibly powerful and well-known framework for reactive programming, part of a wider family of Rx* libraries that span a remarkable number of languages and technologies. It will possibly outlive some of our careers, and it is actively maintained and supported. It became very popular on Android many years ago, as it introduced a new, versatile way of asynchronously dealing with and manipulating streams of data. It made up for the lack of equivalent tools in the platform at the cost of a, erm, slightly steep learning curve, some performance caveats and (pure personal opinion) some debatable naming choices.

At the time, I started hearing developers talk about it as the “next big thing” (as you often hear), and indeed, it became so. So, admittedly a little late, I jumped on the Rx train and, after the above-mentioned steep learning curve, I started to use it and actually like it. Pretty much every developer I knew started using it. Pretty much every job spec required that you were proficient with it.

Then something horrible happened. I saw developers using it literally everywhere, for everything, even when it wasn’t necessary or recommended in the first place. I saw it infect every layer and boundary of their architecture. I saw libraries upon libraries that relied solely on it, and even talks about architectures based on it! And when I asked about the reasoning, the most frequent answer I got was “Because it’s cool, and powerful! Plus, it’s the de facto standard on Android, it won’t be replaced anytime soon!”.

A few years later, enter Kotlin, a whole new and better JVM language. Enter coroutines, a new and revolutionary way (for Java developers) to do concurrency. Enter Kotlin sequences and the language’s powerful native constructs, and those RxJava cult developers trembled. Enter the new Kotlin Flow framework, and those developers started shaking.

Now, I know this might be controversial, but their reasoning was wrong, and a better tool did come up. RxJava will still be in use for years, and there are certainly use cases where it remains the best solution over its Kotlin equivalents, but it is getting outdated. A new framework, built into the language (which is always a stronger, though not absolute, guarantee of a longer life), has come and replaced it, and now many developers (and their successors) are left with a codebase that will either remain unnecessarily complex, or take painful months of manual rewriting.

The reason is that they favored a framework over some of the most crucial best practices and architecture principles. Layer boundaries, abstractions and a more accurate analysis of the statement “frameworks are implementation details” would have saved them from this situation.
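To illustrate (a sketch with hypothetical names, not a reconstruction of any real codebase), this is the kind of boundary that would have contained the damage. The domain layer owns the abstraction; RxJava stays confined to one implementation that can later be rewritten with coroutines or Flow without touching its callers:

```kotlin
import io.reactivex.Single
import kotlinx.coroutines.rx2.await

// Domain layer: callers depend only on this abstraction; no Rx types leak out.
data class User(val id: String, val name: String)

interface UserRepository {
    suspend fun getUser(id: String): User
}

// Data layer: a hypothetical Rx-based API client...
interface RxUserApi {
    fun fetchUser(id: String): Single<User>
}

// ...wrapped so that RxJava remains an implementation detail. The
// kotlinx-coroutines-rx2 interop bridges Single into a suspend call.
class RxUserRepository(private val api: RxUserApi) : UserRepository {
    override suspend fun getUser(id: String): User = api.fetchUser(id).await()
}
```

Replacing RxUserRepository with a Flow- or Retrofit-based implementation then becomes a local change instead of a months-long rewrite.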

(I’m just waiting now to see the first example of a codebase where most of the functions are suspending functions: there are always ways to abuse a framework!)

Frameworks vs best practices

I mentioned before that, as software professionals, we need to be able to quickly learn new paradigms and technologies, and that the knowledge of one of those at a specific point in time is only valuable in the short term. Once you have learnt, during your career, to use 100 frameworks and libraries, learning the next one is a matter of adapting what you know, familiarizing yourself with the domain language and making sure you use the framework following the key best practices of software.

Yet, why do we keep seeing “knowledge of library X and framework Y” at the top of job requirements? Why do we focus primarily on those during interviews? Why are 95% of all talks at IT conferences about the next shiny new thing?

I feel that those key, basic concepts and best practices lie dormant within most of us; they are sometimes ready to be used when we want to impress an interviewer, or on the rare occasions we decide to “talk architecture” with our colleagues; they seldom surface in code reviews, on the occasions when we have a tiny bit more time to think a solution through because we don’t have to ‘deliver yesterday’.

These concepts are also not taught as well as they should be at university, and senior developers don’t put enough effort into making sure that less experienced colleagues learn them, in theory and in practice.

Sadly, I believe that Google and the Android ecosystem, for example, are somewhat guilty of being too silent on the topic. Big IT companies and software platform owners understandably invest most of their money and time in keeping developers engaged; the more new stuff they introduce, the more the average developer is hooked. Nobody wants to work in a slow, static environment, and Google does a great job at that. There are tons of videos, tutorials and events about the “shiny new thing”.

But they miss a big opportunity to ensure the future stability of the platform (and the quality of its apps) through mentoring developers in best practices and architecture. Not only that, they sometimes release frameworks that are extremely hard to adapt to a good architecture and to the concept of separation of layers, without giving any advice about it in their documentation.

Google is not alone in this; I’m afraid it is (perhaps always has been) the industry standard. Somehow, it seems like marketing a platform is more important than what we’re ultimately here for: producing useful, profitable (where needed), quality software. If we don’t do that, we’re not professionals. We can’t complain when software developers are dismissed as unreliable, less disciplined than other engineers, and lacking the structure and scientific background of other technical fields. It’s not like lives depend on it, right? (Ask Boeing or Tesla.)

I challenge you to think about how much time in the last year you spent learning everything about the new version of framework X, and how much time you have dedicated to honing your refactoring skills, or re-reading that book about design patterns.

I believe (and, again, many of us do too) that best practices should be thought about and applied every day, in every decision we make, even the most trivial. Best practices should occupy the most important place in our heads when we start a new project or feature, and we should consider frameworks and libraries for what they really are: amazing, versatile, yet replaceable tools; implementation details that we will likely change at some point.

So what are these best practices?!

“Best practices” is a very broad term. It can mean everything and its opposite. And the subject is so vast and detailed that there are hundreds of books about it, from complex software theory to design patterns for very specific use cases.

I’ll do my best to make a (non-exhaustive) list of what I consider the most important aspects of the subject. This post is not about the details: I hope I get the chance to write more about the actual best practices in the future, but then again, I won’t be able to explain them better than the authors I mention below!

Let’s recap the categories in which (I believe) we can place best practices, and then list a super quick overview of each one of them:

  • Architecture and layer separation. The big picture of a software structure. How components interact and how independent they are.
  • Clean code and S.O.L.I.D. “The devil is in the details”. Every single line of code and every class matters, and should follow certain rules.
  • Continuous refactoring. No codebase should be static: improving it constantly is the only way to keep it tidy and working over time.
  • Testing. Not only to ensure software actually works, but also to define behavior and to keep continuous improvement from breaking it.
  • Development processes. The same rigorous discipline a codebase is subject to must be applied to a Team: code reviews, continuous integration and work methodologies are as important.

Architecture and layer separation

Architecture is the one design aspect of a project that’s most often overlooked, almost belittled. Unfortunately for us, it is also the most critical in the medium- and long-term life of a codebase. A project with an inconsistent codebase, a poorly thought-out (or not thought-out at all) architecture, or codependent components is doomed to accumulate technical debt, and become a tangled mess that is going to cost the business money, and coronaries to those who need to fix it.

Here are a few things to keep in mind when creating a new project, and throughout its lifecycle:

  • Choose an architecture, and stick to it in a consistent way. It can be adapted to the project’s needs, as long as the main principles are respected, but its structure must be kept religiously.
  • Projects grow over time; they rarely shrink in size. An architecture must be flexible enough to allow expansion. Don’t be afraid to choose a more complex architecture from the beginning if you feel the project is going to need it in the future: a slight overhead at the start is going to be handsomely rewarded in terms of stability and maintainability in the long term.
  • Unless your project is tiny and you are 100% sure it will remain so, MV* patterns are not enough. They can be the base upon which a more complex structure is built, but they barely scratch the surface of what separation of layers really is. They usually crumble when a project needs more than trivial business rules and a basic UI.
  • Organizing the vertical structure of a codebase is not enough. Pay attention to the best practices for splitting code by feature as well, and (again) be consistent with it, or the complexity of the codebase will grow exponentially as it gets bigger (splitting features by package is a good start).

I’ve personally been using the Clean Architecture approach for quite a few years now, and I feel comfortable recommending it for most scenarios. It’s flexible, scalable, quite easy to understand and adaptable to any kind of software technology and platform.

That doesn’t mean it is the only way to go, or that it’s the silver bullet for every development problem. As long as you follow the guidelines above, most layered, structured architectures are already a million times better than total chaos (or a poorly thought framework-based solution).
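To give a taste of what this looks like in code, here is a deliberately tiny sketch (hypothetical names, my own toy example) of the dependency rule at the heart of Clean Architecture: inner layers define the interfaces, outer layers implement them, and all source-code dependencies point inwards:

```kotlin
// Entities (innermost layer): pure business objects, no framework imports.
data class Order(val id: String, val amountCents: Long)

// Use case layer: business rules that depend only on abstractions they own.
interface OrderGateway {
    fun findOrder(id: String): Order?
}

class GetOrderTotal(private val gateway: OrderGateway) {
    fun execute(orderId: String): Long =
        gateway.findOrder(orderId)?.amountCents
            ?: throw NoSuchElementException("No order $orderId")
}

// Outer layer: persistence implements the gateway. Swapping this for Room,
// SQLDelight or a REST client never touches the use case above.
class InMemoryOrderGateway : OrderGateway {
    private val orders = mutableMapOf<String, Order>()
    fun save(order: Order) { orders[order.id] = order }
    override fun findOrder(id: String): Order? = orders[id]
}
```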

Separation of layers

Any decent architecture’s prime concept is separation of concerns. It’s not only about the Single Responsibility Principle. Using meaningful layers is the only way to structure a project in a tidy and consistent way; it is how concepts and components belonging to different domains or levels of abstractions get separated.

Separation of layers also helps with:

  • Testing at the right level of abstraction and with the correct framework
  • Understanding requirements: figuring out which layer and type of abstraction code belongs to also helps with better understanding what it is supposed to do (and how it should behave)
  • Navigating code more easily by providing a set of rules on where to place it in the first place
  • Onboarding new developers by providing a common, known approach to writing code
  • Pipelining work on new features, allowing multiple developers to work on the same use case without stepping on each other’s toes
  • Migrations to new technologies: a layered architecture better allows code reusability across platforms, since it relegates frameworks to implementation details.

Layers must also be protected by boundaries and abstractions, otherwise they’re just a good theoretical exercise. Do not leave separation of layers to individual goodwill. Use separate modules and dependency inversion, and enforce layer consistency at compile time, so that code that shouldn’t be accessed from the wrong layer is actually impossible to call.
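On Android/JVM projects, one concrete way to get that compile-time enforcement is Gradle modules whose dependencies can only point inwards. A sketch, assuming a Kotlin DSL multi-module setup with hypothetical module names (the build files are condensed into one listing for brevity):

```kotlin
// settings.gradle.kts
include(":domain", ":data", ":app")

// domain/build.gradle.kts: a pure Kotlin module with no Android or framework
// dependencies, so UI and database code physically cannot be referenced here.
plugins {
    kotlin("jvm")
}

// data/build.gradle.kts: may see :domain, never :app.
dependencies {
    implementation(project(":domain"))
}

// app/build.gradle.kts: the outermost layer wires everything together.
dependencies {
    implementation(project(":domain"))
    implementation(project(":data"))
}
```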

Clean code and S.O.L.I.D.

Architecture is fundamental for a stable high level structure of a project, but it’s not enough to ensure its quality. If we look at a codebase from the opposite point of view, each line of code builds up towards its final structure.

If we write poor code at a fine-grained implementation level, the project will be poor, buggy and difficult to understand. We need to apply the same discipline, and an even larger set of rules, to every line of code we write, no matter how trivial that code is.

A few of the many details to keep in mind:

  • How code looks matters, and it is the signature of the work of a professional. Using an automatic formatter and a consistent set of rules (preferably enforced by automated tools) is imperative; vertical distance and ordering between functions are crucial to ensure readability. Apply the same discipline to documentation and comments as well.
  • Naming is incredibly important. Choose meaningful, unambiguous, unique names for everything (packages, classes, functions, variables, even resources), and when necessary make sure the domain language is documented somewhere.
  • Keep functions and classes as short as possible. 300 lines or fewer for a class is a good, realistic target in my opinion. The only way to achieve this is to use a well-structured architecture and refactor often.
  • DRY (Don’t Repeat Yourself). Code repetition is the root of all evil in software. Refactor so that it never happens; create abstractions and use generic types and factories to find a common approach to similar problems (see the sketch after this list).
  • Use universally known, recognizable design patterns whenever you can apply them to your problem. If you use a domain-specific pattern or solution, document it properly and use it consistently across the codebase.
  • Use the S.O.L.I.D. and OOP principles and constantly check your code for violations of those. Always keep in mind the general (and language specific) best practices about error handling, cancellation, termination, synchronization, encapsulation, polymorphism, testing.
  • Get the best out of your language innovations, and consider all programming paradigms (e.g. functional, reactive, sync/async) when solving a problem.

(a gazillion more of these are described in detail in the books ‘Clean Code’ and ‘The Pragmatic Programmer’ listed below)
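As a toy illustration of a few of these points at once (intention-revealing naming, no magic numbers, idiomatic standard-library calls instead of hand-rolled loops; the example is mine):

```kotlin
// Before: a vague name, a cryptic magic number, a hand-rolled loop.
fun proc(l: List<Double>): Double {
    var t = 0.0
    for (x in l) t += x
    return t * 0.2
}

// After: the same behavior, now readable at a glance.
const val VAT_RATE = 0.2

fun vatFor(netPrices: List<Double>): Double = netPrices.sum() * VAT_RATE
```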

Continuous refactoring

Technical debt is the number one enemy of every project. And the only feasible way to avoid it piling up is to continuously improve the code you are working on. The “Boy Scout rule” needs to be constantly kept in mind in our daily work, so that we always leave a piece of code in better condition than when we found it.

Massive overhauls of whole products, big refactorings and (even worse) full rewrites don’t work! By the time you reach a situation where this is the only way forward, it’s too late. And by the time you are finished, you might end up with the same problems as before. It’s hard and expensive for a business to allow those, and the project owner is going to be resentful toward the development team for letting the codebase decay into such a state.

There are many techniques that prevent this situation from happening. You can achieve this by following best practices and design patterns in your code, but also constantly practicing the “art of refactoring”. It’s a difficult skill to master and the only way to get better at it is to exercise it all the time.
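Here is what that looks like day to day: a tiny sketch (my own example) of a Boy Scout clean-up applied in passing to a function you happened to touch while fixing something else:

```kotlin
data class Customer(val isPremium: Boolean)

// As found: nested conditionals obscuring a one-line rule.
fun discountFor(customer: Customer?): Double {
    if (customer != null) {
        if (customer.isPremium) {
            return 0.1
        }
        return 0.0
    }
    return 0.0
}

// After a two-minute clean-up, committed alongside the actual fix
// (renamed here only so both versions compile side by side).
fun discountForCleaned(customer: Customer?): Double =
    if (customer?.isPremium == true) 0.1 else 0.0
```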

Testing

There is no way to safely apply a successful constant refactoring strategy if the code you are touching is not tested. And if your testing strategy is not thorough, you will release code that is broken, and your business will pay for it. It’s as simple as that.

Again, there are tons of books that delve into the subject, and automated tools that make testing quicker and easier. TDD or not, make sure you test at different levels of abstraction, from units of code to the integration of components, and that the development process is stopped immediately when a test fails.

Also, tests are almost useless unless they are fast enough to be executed often, and run regularly by an automated tool every time new changes are introduced to the development branch.

Once again, make sure that frameworks don’t get in the way of your testing strategy, and that your tests are easily maintainable and changeable. If they are also easy to read and understand (e.g. using common naming strategies such as GIVEN/WHEN/THEN) they will also constitute useful documentation for the component they test.
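For instance, here is a minimal sketch of such a test (JUnit 4, reusing the hypothetical GetOrderTotal use case from the architecture section above):

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

class GetOrderTotalTest {

    @Test
    fun `given a stored order, when its total is requested, then the amount is returned`() {
        // GIVEN
        val gateway = InMemoryOrderGateway().apply {
            save(Order(id = "42", amountCents = 1999))
        }
        val useCase = GetOrderTotal(gateway)

        // WHEN
        val total = useCase.execute("42")

        // THEN
        assertEquals(1999L, total)
    }
}
```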

Development processes

An ensemble of people who each follow different steps to produce code is not a Team. It is very difficult to deliver a high quality product and a consistent codebase when the people working on it don’t agree on a set of common best practices and processes. Generally speaking, these are aimed at enhancing communication between developers, creating a consistent workflow throughout a series of development cycles and detecting problems in those flows as early as possible.

Some of these include:

  • Practice pair programming whenever possible. It should be used not only as a mentorship tool, but also as a way for teammates to learn from each other and form bonds.
  • Regular, rigorous but constructive code reviews. This is where consistency rules are checked, and where best practices that escape the automated tools’ scope are enforced. The team must be involved and engaged when it comes to everybody else’s contributions.
  • Effective and consistent version control and branching strategies. Branches must be traceable, preferably linked to a task management tool and naming and lifecycle rules should be agreed upon by the team.
  • Usage of a fast, reliable and automated continuous integration tool that runs tests, static analyzers and linters before new code is pushed to the development branch (and at other critical points, e.g. releases); see the snippet after this list.
  • Choice of a well structured, known team methodology (Scrum, Agile, XP). Reinventing the wheel in this area just doesn’t work; picking a framework that suits the team and project size is important, but so is getting the team trained to make sure everybody knows what they’re doing.
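As a small example of the automation point above (a sketch, assuming a Gradle build with the detekt and ktlint plugins applied; the task names come from those plugins), every analyzer can hang off the standard check task, so CI only has to run a single command, e.g. ./gradlew check, on every pull request:

```kotlin
// build.gradle.kts: make the standard verification task the single CI gate.
tasks.named("check") {
    dependsOn("detekt", "ktlintCheck")
}
```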

Last but not least, keep in mind that the best way to ensure these processes work is to form a collaborative and supportive Team environment; friction is likely at some point, and not everybody will always agree 100% with the details. Nonetheless, it is important that each team member knows the rationale and theory behind every decision and accepts it.

The reaction

After reading about these best practices, I know many of you will go “Duh! That’s all obvious stuff!”. And indeed, it is. But think about how many times we violate or compromise on them because we are in a rush or we think it won’t matter; think about how many times these bad choices came back to bite us in the a$$ some time later.

It’s not enough to know that best practices work. Most developers will tell you they agree. As professionals, we have to force ourselves to use them constantly, even (or especially) when we are under pressure and the stakes are high.

Conclusions

Frameworks are implementation details. They are short-lived, replaceable and, however indispensable, merely tools. As professional engineers and developers, we should place the science, the best practices and the principles that have been around for decades, and are proven to work, at the core of our day-to-day technical choices. Keep them in mind for even the most trivial decisions, and prevent frameworks and libraries from taking over your architecture and codebase.

As we progress in our careers and gain experience, it is also our duty, as professionals, to pass these best practices on to those who are less experienced, and to make sure they are talked about in the development community, be that at conferences or with colleagues in front of the coffee machine.

Bibliography

As mentioned before, there are many, many books on the subject. This is not an exhaustive list, nor do I imply that there aren’t better ones. These are the books that inspired me in the past, and the ones I have managed to test in real life. And somehow, I have also been able to verify that they work!

From the “Clean” collection by Robert C. Martin:

  • Clean Architecture: A Craftsman’s Guide to Software Structure and Design, Robert C. Martin [Prentice Hall, 2017]
  • Clean Code: A Handbook of Agile Software Craftsmanship, Robert C. Martin [Prentice Hall, 2008]
  • The Clean Coder: A Code of Conduct for Professional Programmers, Robert C. Martin [Prentice Hall, 2011]
  • Clean Agile: Back to Basics, Robert C. Martin [Prentice Hall, 2019]

Other books:

  • The Pragmatic Programmer: From Journeyman to Master, Andrew Hunt & David Thomas [Addison-Wesley Professional, 1999]
  • Design Patterns: Elements of Reusable Object-Oriented Software, E. Gamma, R. Helm, R. Johnson, J. Vlissides [Addison-Wesley Professional, 1995]
