Back in the day, life was easy. N-tier architecture ruled the world, and each tier was implemented in roughly one technology, for example:
- data layer - relational database
- business layer - Java EE/.Net
- user interface - HTML + CSS + JavaScript
Then microservices and functions came along, together with the approach called Polyglot Programming (and the related term Polyglot Persistence, which pertains specifically to the data layer).
So, don't get me wrong. I completely accept the fact that there are many languages out there and different languages can be a better fit for different things.
If I want to provision infrastructure, I use Terraform. If I want to do some scripting, I use Bash or Python. If I want to analyse data, I go with Python or R.
What I strongly disagree with is introducing new technologies to a project just "because I can", "because I want to play with something new" or "because technology X is a bit better/fancier/more popular than technology Y".
Different languages on the project
As architects and developers, we design and implement systems so that they meet all customer requirements, and at the same time we strive to make them as simple as possible.
Simplicity is key to maintainability and extensibility. Every new requirement, every new test we create, every new library we use makes the system more complex.
And introducing a new language is one of the most complex changes I can imagine.
This complexity is not about the syntax. Syntax is usually easy. And it's not about writing one-off code. Writing code that you plan to throw away tomorrow is super easy.
But when we write enterprise-grade systems, the bar is set much higher. Just a few things off the top of my head:
- Idiomatic ways of approaching common problems (logging, error handling, input parameters etc.)
- New libraries (a new language usually means learning completely new libraries that do exactly the same things as the ones we already know, but in completely different ways)
- Packages/modules/dependencies management (again: completely new way of doing the same things)
- New toolset (compilation, testing, debugging)
- Development environment configuration
- Integration with CI/CD framework
And of course, when one person on the team starts to use a new technology, all the other members sooner or later have to learn it too (to do code reviews or to stand in when needed).
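To make the "same thing, completely different way" point concrete, here is a small sketch of a single CI step - install dependencies and run the tests - as it typically looks in four ecosystems. The commands are only printed (a dry run), and the tool choices are my illustrative assumptions, not something prescribed by any particular project:

```shell
# Dry-run sketch: print the equivalent "install deps + run tests" CI step
# for four ecosystems. Tool choices here are illustrative assumptions.
ci_step_overview() {
    echo "Java:   mvn test                                  # deps resolved from pom.xml"
    echo "Node:   npm ci && npm test                        # deps pinned in package-lock.json"
    echo "Python: pip install -r requirements.txt && pytest # deps listed in requirements.txt"
    echo "Go:     go test ./...                             # deps declared in go.mod"
}

ci_step_overview
```

Four manifest formats, four lockfile/caching stories, four CLI tools to wire into the pipeline - and that is before debugging why one of them behaves differently on the build agent.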
Different databases on the project
Let me tell you my story:
One day I was asked to provision MongoDB and Elasticsearch for one of the projects.
I had no experience with those databases, but spinning up containers and exposing the endpoints was super easy. The development team was happy.
Then, a few "small" activities came along:
- High availability - It turned out that I had to set up two 3-node clusters. And of course, setting up a cluster for MongoDB was very different from setting one up for Elasticsearch. After provisioning, I had to test that both clusters actually worked.
- Backups - For MongoDB it was quite easy, but for Elasticsearch it turned out that a shared volume was required (so an NFS server had to be configured). Again, the backup and recovery procedures had to be tested.
- Security - users, roles, which ports must be open, which ports can be closed, and so on. Completely different ways of doing the same things in each database.
- Patching, monitoring, performance tweaking and troubleshooting...
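The backup asymmetry is a good illustration. Below is a dry-run sketch (the commands are only echoed, not executed, and the host names and paths are hypothetical): MongoDB needs a single `mongodump` invocation, while Elasticsearch first needs a snapshot repository, and for the `fs` repository type its location must be a shared volume (hence the NFS server) registered under `path.repo` on every node:

```shell
# Dry-run sketch of the backup asymmetry. Commands are echoed, not run;
# host names, ports and paths below are hypothetical placeholders.
backup_commands() {
    # MongoDB: one self-contained dump command
    echo "mongodump --host mongo-0.internal:27017 --out /backups/mongo/\$(date +%F)"

    # Elasticsearch: first register an 'fs' snapshot repository whose
    # location is a shared volume (e.g. NFS) listed in path.repo on all nodes
    echo "curl -X PUT http://es.internal:9200/_snapshot/nightly -H 'Content-Type: application/json' -d '{\"type\":\"fs\",\"settings\":{\"location\":\"/mnt/nfs/es-snapshots\"}}'"

    # ...then trigger the actual snapshot
    echo "curl -X PUT 'http://es.internal:9200/_snapshot/nightly/snap-1?wait_for_completion=true'"
}

backup_commands
```

Two databases, two completely different mental models for the same operational task - and both procedures still have to be restore-tested.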
The conclusion is the following:
Of course, one can learn how to operate any database in the world. But it requires significant time and effort. Each new database brings new tools, new concepts and tons of documentation.
So, whenever possible, stay with one general-purpose database to handle most of the traffic. Add new databases only to handle very specific workloads, and only when really necessary.
Some examples of such specific workloads are caching, graphs, big data and time series. However, we should also bear in mind that modern databases usually support more than one type of workload.
And what is your take on this? I would love to hear your comments and experiences!