
Viktor Demin

Originally published at Medium

Poor development practices I still see in startups šŸ‘€


Hi, Iā€™m Viktor. Twelve years ago, I joined a web studio in my hometown. That day marked the start of my career as a developer. Back then we had no Git, no CI/CD, no test bedsā€¦ And I saw how it impacted the growth of our team and business. We had to do a lot of trial and error, discover new practices, and implement them all on our own. Since then, Iā€™ve been a senior dev in a Russian financial holding and a German b2b startup. Iā€™ve also been a team lead in a food tech project, a CTO in educational projects for the Russian and LatAm marketsā€¦ And in most of those projects, I saw similar issues. I recently moved to Israel and found a position as a consultant for a startup. Guess what I found?.. Right.

I wrote this piece to show some examples of businesses shooting themselves in the foot (while any startup’s whole point is to “run fast”). I’m also going to share the chart I use myself to help fix those issues; the chart was actually the reason for writing this text. You can find a link to it at the end of the article.

No Git; development done in production

A classic case: a project starts with a single dev. They make something, upload it via FTP, test it, and fix it right in prod. They might even have repositories, but those are often empty or outdated.

Then the scaling begins, and more devs join. A most curious thing often happens next… For instance, one such dev who had worked alone and ignored Git managed to wipe a couple of days’ worth of my work. Fortunately, I use a version control system in every project I work on, so the code was easy to restore; you can imagine what would’ve happened if I hadn’t. I wanted to sell the “you need Git for backups and for scaling the team” idea, so I made some slides and presented them at a Friday meeting over pizza. It worked! Later, when prod went down and the other dev was unreachable, I could at least see what they’d changed, and the fix didn’t take long.

Why you need Git

No code review

I once joined a project with none of the original devs left. One of my tasks was to change the pagination method. Doesn’t sound too complicated, right? Just open the function and swap it out in a couple of minutes. Except I couldn’t do that. My predecessor had a “sophisticated” approach: copy a piece of code and tweak the copy. That left me with 200 near-identical copies to correct by hand, a job that took three days. Any code review by a colleague would likely have prevented this from happening in the first place.
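To make it concrete, here’s roughly the shape that code should have had from the start: one shared helper instead of two hundred tweaked copies. This is a minimal sketch, with a hypothetical paginate() and made-up parameters rather than that project’s actual code:

```python
# pagination.py -- a hypothetical sketch: one parameterized helper instead of
# copy-pasting the same slicing logic into every listing endpoint.
from typing import Sequence, TypeVar

T = TypeVar("T")

def paginate(items: Sequence[T], page: int, per_page: int = 20) -> list[T]:
    """Return a single page of items; page numbers start at 1."""
    if page < 1 or per_page < 1:
        raise ValueError("page and per_page must be positive")
    start = (page - 1) * per_page
    return list(items[start:start + per_page])

# Changing the pagination method now means editing one function,
# not hunting down 200 near-identical copies.
```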

Thankfully, I had some time to sit down, think it through, and improve it.

Here’s another case where having no code review was a major blunder. New project: the CRM, the foundation of the whole business, was getting slow. It needed a fix, and management was getting pushy. The dev decided to skip all the standard practices: code review, testing, even autotests. They just pushed their code to Git and released it. The CRM simply went down. It was an ordinary human mistake made under heavy stress and a tight deadline. Having someone else check the commit would’ve been far safer. I believe you CAN skip the autotests when they take too long and you have a critical situation on your hands, but you should never skip code review.

No CI/CD

In a different project, the team released manually via git pull, and the website would regularly go down. Once, for example, they simply forgot to install the packages. Support, the business, and even the users started questioning the team. Introducing CI/CD worked like a charm.
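Even before a full pipeline, scripting the release kills the “forgot to install the packages” class of mistakes. Here’s a minimal sketch of the idea, not that team’s actual setup; the commands, file names, and service name are assumptions for a typical Python web app:

```python
# deploy.py -- hypothetical scripted release: every step runs every time,
# and the release aborts on the first failure instead of leaving prod half-updated.
import subprocess
import sys

STEPS = [
    ["git", "pull", "--ff-only"],                      # fetch the release revision
    ["pip", "install", "-r", "requirements.txt"],      # the step that was once forgotten
    ["python", "manage.py", "migrate", "--noinput"],   # assumed Django-style migrations
    ["sudo", "systemctl", "restart", "myapp"],         # hypothetical service name
]

def main() -> int:
    for step in STEPS:
        print("->", " ".join(step))
        if subprocess.run(step).returncode != 0:
            print("Step failed, aborting the release.", file=sys.stderr)
            return 1
    print("Release finished.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A CI/CD pipeline is essentially the same checklist, just triggered from Git and run on a build server instead of in someone’s SSH session.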

In another case, having Git and CI/CD played a major role in meeting the good old “release faster” demand. We needed a German version of the website ASAP as a landing page for potential investors. With an automated deploy process and a quick way to spin up new test servers through CI/CD, we cut out the fluff, created dedicated databases, uploaded the translations, and had a test bed for the investors in no time.

CI/CD benefits

No transparency in task descriptions

I ā€œlove itā€ when the task is buried somewhere in 150 chat messages. Or worse, in DMs.

It all went down. Again :)

We once had a big chat with the call center employees. A QA team was supposed to screen that chat for issues, ask for additional details, and create a tracker task whenever they found a bug. However, the call center employees quickly realized that writing in the chat meant answering all those standard questions, sending screenshots and links, and doing other “complicated” things. If they wrote directly to some “tester Jack” instead, he’d usually just fix the issue himself via admin access, much faster.

QA was confident they were doing the right thing, but in reality, we were losing money on all the manual labor and the missed bugs. All hell broke loose once one of those go-to employees left the company: we were flooded with previously hidden issues and tiny, pesky bugs that took a lot of urgent effort to fix.

Once, though, we managed to prevent that scenario. We had a “major” task involving an integration with a big client. The manager decided to bypass all the processes and gave the task to a dev directly, orally: “Just do what I tell you, don’t overthink it.” Thankfully, the team was already used to following the standard practice: we discussed the task at a daily stand-up and added it to the tracker. The research stage raised quite a few uncomfortable questions. As it turned out, we could’ve leaked our entire commercial database if we’d gone the “don’t overthink it” route.

No ā€œ1 task = 1 code branchā€ rule

We were building a large new piece of a system that paying clients were already waiting for. The launch promised a good profit, and each day of delay was eating into it. So we decomposed the work into smaller, independent tasks to release the core faster and add the extra features along the way.

I created 10 tasks in the tracker, but the team decided to do them all in a single branch for some reason. So, the deadline passed, and we still couldnā€™t release: one of the minor tasks had a critical bug that couldnā€™t be fixed quickly. Had we had separate branches, we couldā€™ve released the other 9 tasks and launched the system. Thankfully, the team learned from this and switched to my proposed method. When I joined a different team, however, we had to go through this ā€œenlighteningā€ experience all over again.

No autotests added as the project grows

Some startups live long enough to accumulate legacy. Here’s how I started using autotests: I kept joining projects with a history, where manual testing was hard, and I wanted something automatic to make sure my changes wouldn’t break anything.
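On legacy code, the cheapest way to get that safety net is a characterization test: record what the code does today, then refactor against it. A minimal pytest sketch, with a hypothetical paginate() helper inlined to keep it self-contained:

```python
# test_pagination.py -- a characterization-test sketch (pytest): it records what
# the code does today, so a refactor that changes behaviour fails loudly.
import pytest

# In a real project you would import the legacy function; it's inlined here
# only to keep the sketch self-contained.
def paginate(items, page, per_page=3):
    start = (page - 1) * per_page
    return list(items[start:start + per_page])

@pytest.mark.parametrize(
    "page, expected",
    [
        (1, [0, 1, 2]),
        (2, [3, 4, 5]),
        (4, [9]),   # last, partial page
        (5, []),    # past the end
    ],
)
def test_paginate_keeps_current_behaviour(page, expected):
    assert paginate(list(range(10)), page) == expected
```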

I once knew a dev who wanted to “rewrite everything from scratch” instead of writing autotests for the legacy code. There was no convincing him otherwise, so I gave him a piece of code to rewrite. Once he released it, some non-crucial functionality of the website went down. The new code had to be discarded because restoring the old code was cheaper, faster, and simpler. At least he got a controlled environment in which to learn what a post-mortem is. As a lesson of sorts, the dev then had to write the autotests.

No error monitoring

Startups often have chats with early or active users. So a support person or a PM comes to you and says: “Hey, there’s a thing that doesn’t work for our user,” with details along the lines of “I don’t know what it is” or “this is it, but I have no idea how it breaks.” If that feature works for you, figuring out what’s wrong and how exactly it fails for that particular user is a pain. All you usually have is a cropped screenshot or another “it still won’t work, fix it ASAP” from the user.

Before we added a monitoring system, we’d learn the website was down ten minutes after the fact, at best. Aaand it was the users who told us. With monitoring, we started fixing bugs before anyone reported them. As a bonus, the logs repeatedly helped us discover extra bugs, like an order being created on the website but never making it into the database.
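For reference, wiring a Python service up to Sentry takes a few lines. A minimal sketch: the DSN is a placeholder, and the failing order save is a made-up stand-in for that “order never reached the database” kind of bug:

```python
# monitoring.py -- a minimal Sentry setup sketch for a Python service.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    environment="production",
    traces_sample_rate=0.1,  # sample a fraction of transactions for performance data
)

def save_order(order_id: int) -> None:
    try:
        # ... imagine the real write to the database here ...
        raise ConnectionError("database is unreachable")  # simulated failure
    except ConnectionError:
        # Unhandled exceptions are reported automatically; capturing explicitly
        # lets us keep serving the user while still getting the alert.
        sentry_sdk.capture_exception()

if __name__ == "__main__":
    save_order(42)
```

Once errors land in a tracker with a stack trace and request context, “it still won’t work, fix it ASAP” turns into an event you can actually debug.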

Monitoring errors with Sentry

Conclusion

You might ask: why are you still working for startups?! Well, I like building streamlined processes out of chaos. Over my career, I’ve noticed two major issues in such projects: they don’t invest in working with people, and they don’t have automation. I see only two ways out. One, educate. Two, implement trusted tools and practices everywhere you can. If things are really bad, start with a CI/CD release pipeline: that usually pulls in Git and code review as well. Together, they make a great foundation for further improvements like extending the pipeline, monitoring, and so on.

Once you’ve done both, you’ll get a completely different speed and quality of development. I usually follow this chart myself. Discussion and additions are welcome. And of course, we’re all looking forward to your stories!

Top comments (1)

Max Normand

Good read šŸ‘ reminds me of my time in agencies (particularly the bit about code reviews šŸ˜‚)

I worked with a great senior who said to me once:
ā€œAn application is like a kitchen, keep it clean, otherwise customers will be eating šŸ’©ā€