Whenever I evaluate tech, all I usually see are its great qualities. It doesn't matter whether it's a database, a web framework, a programming language, or a library. Those who talk about a technology tend to be either its fans or its creators. It is in their best interest to get people to use it, and emphasizing the positive aspects of a piece of technology helps do that.
This attitude doesn’t make anyone a bad person. This happens to be standard marketing practice. You and I would probably do the same thing for stuff we’ve built. If not, then it is the first piece of feedback we would get on our pitch for it.
But nothing is perfect.
Most things are designed with a purpose, or a set of purposes, in mind, and they will fail in some cases. Even technology meant to be as general-purpose as possible will run into situations where it does not excel.
Take MySQL as an example. I love MySQL. I've been using it for years, and many different types of applications can be built with it.
However, I would not use MySQL for analytics or search. Some people try. It will work for a little while. But MySQL was not designed for large-scale analytics or search. Past a certain volume of data it becomes extremely slow, and that slowdown doesn't happen linearly. It happens suddenly and with little warning.
Elasticsearch, on the other hand, is great for analytics and great for search. It was built to handle those cases very well. But while its creators are working on making it more resilient against failure, it still isn't as reliable as MySQL. MySQL makes it easier to avoid losing your users' valuable data.
Catching a resiliency problem isn’t exactly easy though. Developers try to make sure their software will actually run. Good software will rarely fail. However, it will still fail. How a piece of technology handles that failure matters a lot because it shapes what we would use it for.
MySQL handles failures really well, so I use it as my definitive source of data. Elasticsearch is less reliable, but it performs analytics and search queries on large data sets much faster than MySQL, so I use it for analytics and search.
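That division of labor follows a common pattern: treat the reliable store as the source of truth and the fast store as a derived, rebuildable copy. Here is a minimal sketch of the idea, using `sqlite3` and a plain dict as stand-ins for MySQL and Elasticsearch so it runs anywhere; the schema and function names are hypothetical, not taken from any real system:

```python
import sqlite3

# Stand-in for MySQL: the durable source of truth.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Stand-in for an Elasticsearch index: fast but treated as disposable.
search_index = {}

def save_user(user_id, name):
    """Write to the source of truth first, then index a derived copy."""
    db.execute("INSERT INTO users (id, name) VALUES (?, ?)", (user_id, name))
    db.commit()
    search_index[user_id] = name.lower()

def rebuild_index():
    """If the search side is lost, rebuild it from the durable store --
    the key property of this design."""
    search_index.clear()
    for user_id, name in db.execute("SELECT id, name FROM users"):
        search_index[user_id] = name.lower()

save_user(1, "Ada")
save_user(2, "Grace")
search_index.clear()   # simulate losing the less-reliable store
rebuild_index()        # nothing is lost; the copy is regenerated
print(sorted(search_index.items()))
```

Because the search side holds only derived data, losing it is an inconvenience rather than a disaster. Real deployments apply the same principle at scale with replication and background reindexing jobs.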
Not understanding these limitations can be extremely costly down the line. I’ve seen numerous companies build analytics with MySQL. It works fine at first. But problems always start cropping up. Solving these problems gets exponentially more difficult as the amount of data grows. The choice becomes either continuing to plow on as is, or to rebuild the system.
Rebuilding is rarely easy though. Swapping out one database for another is the opposite of trivial. My favorite example is the 10 person-years, spread over 3 calendar years, that Yandex spent switching from an Oracle database to Postgres. So the choice is to spend a ridiculous amount of time maintaining the status quo or a ludicrous amount of time rebuilding.
Sometimes it isn’t even a choice. Sometimes there is literally no way to make an existing piece of technology work at the scale you need it to work at. That makes rebuilding an even scarier proposition. In this situation there will be pressure to get a replacement in as quickly as possible. Rushed decisions are rarely the best decisions. Rushed decisions are how teams end up in this situation in the first place.
Being able to prevent these issues by picking the best technology for the job is the most pleasant situation to be in. But that brings us back to the original point: a technology's limitations are very rarely publicized. Elasticsearch happens to be one of the few whose makers do their best to document its flaws, but that's a luxury developers don't often have.
Building quick prototypes will rarely expose the issues that appear at scale either. Anything will work as a prototype.
You could rely on colleagues who have experience with the technology you are evaluating. But that experience was most likely gained by learning things the hard way, which also happens to be the expensive way for a company.
The best way would be if promoters of technology were more up front about what they actually designed it for and what the limitations would be. This would actually be beneficial for them in the long run. Fewer people would attempt to use their tech for something it isn’t built for. That means fewer people will have bad impressions of it.
People hit these limits in high pressure situations. That hinders their ability to think calmly and say “Ok, next time I won’t use this thing for this type of application.” They’re going to be stressed. Their reaction is more likely to be “This thing ****ing sucks! I won’t ever use this again. No one should use this ever! I’m telling everyone I know not to use this.”
Preventing this reaction would do wonders for marketing technology.
Unfortunately, standard marketing practice is the way it is because it works. When Technology A has no flaws listed and Technology B has a laundry list of flaws, we will naturally gravitate towards Technology A because it looks perfect.
What we need to do as consumers of that technology is to be cognizant of this reaction. The makers of Technology B should be praised. The makers of Technology A should be met with a large amount of skepticism. The way to get people to list the flaws in their work is to start giving greater respect to those that do.
This post was originally published on blog.professorbeekums.com