
What and who is DevOps?

In the life of every reasonably successful project there comes a point when the number of servers starts growing rapidly. The single server running the application can no longer cope with the load, so you have to bring in another couple of servers and put a load balancer in front of them. The database, which used to live comfortably on the same machine as the application, has grown and needs not just a separate server, but a second one for reliability and speed. The local team of theorists discovers microservices, and now instead of one problem with a monolithic application you have many microproblems.
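To make that load balancer step concrete, here is a minimal sketch of what it could look like with nginx in front of two application servers (the host names, ports and file name are made up for the example):

```nginx
# Hypothetical /etc/nginx/conf.d/app.conf
upstream app_backend {
    # the two application servers behind the balancer
    server app1.internal:8080;
    server app2.internal:8080;
}

server {
    listen 80;

    location / {
        # hand each incoming request to one of the upstream servers
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
    }
}
```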

The number of servers goes far beyond a few dozen in the blink of an eye, and every single one of them needs to be monitored, logged and protected from both internal ("whoops, I accidentally dropped the database") and external threats.

The number of technologies in use grows after every meeting of programmers who want to play with Elasticsearch, Elixir, Go, Lotus and god only knows what else.

Progress doesn't stand still either: hardly a month goes by without important updates to your software and operating system. You have just got used to SysVinit, and now they say you need to use systemd instead. And they are actually right.
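If you have not made the switch yet, the difference is easy to see: instead of a few hundred lines of init script, a service is described declaratively in a small unit file. A minimal sketch for a hypothetical myapp service:

```ini
# Hypothetical /etc/systemd/system/myapp.service
[Unit]
Description=My application server
After=network.target

[Service]
# user and start command are examples, not real defaults
User=myapp
ExecStart=/usr/local/bin/myapp --port 8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After a systemctl daemon-reload, such a service is started with systemctl start myapp and enabled on boot with systemctl enable myapp.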

Only a couple of years ago, in order to deal with this infrastructure growth, all you needed were a few system administrators skilled enough in bash scripting and manual server configuration. Now you would have to hire a couple more of them every week to keep hundreds of machines under control. Or look for alternative solutions.

A system administrator, or sysadmin, is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems; especially multi-user computers, such as servers.

None of these problems are new – skilled system administrators learned to program and automated everything they could. It is thanks to them that we now have tools like Chef and Puppet. But then another problem surfaced: not every sysadmin was able to retrain and become a real software engineer of complex infrastructures.
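To give an idea of what these tools do: instead of logging into servers and typing commands by hand, you describe the desired state of a machine as code and let the tool enforce it. Here is a minimal sketch in Chef's Ruby DSL (the recipe, template and file names are invented for the example; Puppet expresses the same idea in its own language):

```ruby
# Hypothetical Chef recipe: declare the desired state, Chef converges the node to it.

# the web server package must be installed
package 'nginx'

# the service must be enabled and running
service 'nginx' do
  action [:enable, :start]
end

# the site configuration is rendered from a template shipped with the cookbook
template '/etc/nginx/conf.d/app.conf' do
  source 'app.conf.erb' # hypothetical template name
  notifies :reload, 'service[nginx]'
end
```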

Moreover, programmers, who still do not know much about what happens to their applications after they are deployed, stubbornly continue to blame sysadmins when a new release eats all the CPU and opens the door to every hacker in the world. "My code is perfect, you simply can't tune the servers properly," they say.

In this complicated situation, engineers and those who sympathize with them had to start doing outreach. And how can one succeed at outreach without a catchword? That is how DevOps was born – a marketing term that evokes anything from "internal company culture" to "jack of all trades" in people's minds.

Originally, DevOps did not have much in common with any particular position in an organization. Many people still insist that DevOps is a culture, not a profession – a culture in which developers and system administrators work closely together.

A developer should understand the infrastructure their code runs on and be able to figure out why a new feature that works fine on a laptop suddenly takes down half a data center. This kind of knowledge prevents many conflicts: a programmer who knows how the servers work will not simply shift the responsibility onto a system administrator.

The DevOps area also includes topics such as Continuous Integration and Continuous Delivery.

DevOps has naturally evolved from a "culture" and an "ideology" into a profession. The number of job openings with this word in the title is growing rapidly. But what do recruiters and companies expect from a DevOps engineer? Usually a mix of skills that includes system administration, programming, cloud technologies and automation of large infrastructures.

This means it is not enough to be a good programmer. One should also be well informed about networks, operating systems, virtualization, security and system resiliency, as well as a range of other technologies, from common, time-proven things like iptables and SELinux to fresh and trendy ones like Chef, Puppet and even Ansible.
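As a small illustration of the "time-proven" end of that range, this is roughly what a basic iptables policy for a web server looks like (the rules below are an example sketch, not a recommended production setup):

```bash
# Sketch of a minimal firewall for a hypothetical web server

# keep already established connections and loopback traffic
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT

# allow SSH and web traffic from the outside
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# drop everything else that comes in
iptables -P INPUT DROP
```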

At this point a careful reader who is a programmer would say:

It is silly to think that a programmer who already has tons of tasks on a project will also learn this many new things about infrastructure and the system's architecture in general.

Another careful reader, a sysadmin this time, would say:

I am good at recompiling the Linux kernel and configuring networks. Why do I need to learn programming, and why do I need your Chef, Git and other weird stuff?

We would answer this way: a real engineer is not someone who knows Ruby, Go, Bash or "network configuration", but someone who can build complex, beautiful, automated and secure systems, and who understands the whole life cycle, from the lowest level all the way up to generating HTML pages and sending them to a browser.

Of course, we partly agree that one cannot be an absolute professional in every area of IT at every moment in time. But DevOps is not only about people who do everything well. It is also about eradicating as much ignorance as possible on both sides of the fence (which is, in fact, one team), whether you are a sysadmin tired of manual work or a developer praying to AWS.

In this series of articles we will go through the basic tools and technologies of a modern DevOps engineer, step by step.

A developer who wants to know more about the life of their code after deployment will get the necessary details and a basic understanding of the whole ecosystem, becoming more than just a Ruby/Scala/Go developer with Ansible skills.

Young (and not so young) minds willing to do DevOps will get a picture of how everything works and the guidance they need for further learning. Afterwards, they will be able to comfortably maintain up to two dozen organizations at once and help developers and sysadmins become friends.

System administrators who have grown bored at their jobs will learn a few new tools that will help them remain in-demand professionals in the age of cloud technologies and total automation of infrastructures of every scale.

You will need Linux for this course. We strongly insist on a Red Hat based distribution: the author of this article uses Fedora 27 Workstation as his main system, and the mkdev servers run on CentOS 7.

In the next article we will get an overview of virtualization: we will learn what it is needed for and how to use it.


This is an mkdev article written by Kirill Shirinkin. You can always hire our DevOps mentors and become a DevOps engineer yourself!
