As someone curious about better software development practices, I have long been intrigued by DevOps. Luckily for me, I got the opportunity to participate in Google’s Professional DevOps Engineer specialization track on Coursera. It gave me insight into the world of development operations, cloud computing, and microservices architecture. It is a fantastic course, but it is a lot to take in and sometimes gets a tad complicated. In this series of articles, I will explain, from my personal view and in a simple manner, DevOps, cloud computing, and microservice architecture, as well as some of the underlying technologies.
Amazon Web Services defines DevOps as a combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes.
This speed enables organizations to better serve their customers and compete more effectively in the market. DevOps is essentially a way of life, so any practice that aims to deliver quality software quickly and efficiently can be said to be practicing DevOps.
To understand DevOps you have to understand the problem it came to solve. Typically, an organization has a development team and an operations/deployment team. As the names imply, the developers build the application and then hand it over to the operations team to push to the world. These two teams often work in different environments and on different machines, and miscommunication between the two parties is common: the developer says a particular application worked in the test environment, while the operations person says it is not working on live.
Perhaps a dependency or library that was installed in the test environment was never installed in the live environment, so a feature that depends on that library fails on live. As you can tell, this is not very effective, and a lot of companies started asking themselves: can we make it better? Can we create a test environment that is exactly like live? How can we foster better communication between both parties? How can we automate the testing and deployment process? These questions gave birth to the DevOps model.
Under a DevOps model, development and operations teams are no longer isolated. Sometimes the two are merged into a single team whose engineers work across the entire application lifecycle, from development and testing through deployment and operations, to ensure that the application ships faster and more efficiently.
The benefits of this are improved application delivery, better reliability, faster deployments, and better security.
ADOPTING THE DEVOPS MODEL
CHANGE OF MINDSET
First and foremost, DevOps is a cultural mindset that cuts across all members of the team. Each member takes ownership of the whole lifecycle of the application, communicates properly with everyone involved, and keeps thinking of better ways to serve customers. For developers, it means building applications with the end in mind: designing software that scales, is reliable, and is well tested. All the tooling in the world is useless without proper communication and shared responsibility between the parties involved.
AUTOMATION
Identify processes that can be automated. Whether it is building, testing, publishing, or deploying applications, there is always a way to automate. And when you use automated deploys to send thoroughly tested code to identically provisioned environments, “works on my machine!” becomes irrelevant.
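The idea can be sketched as a tiny pipeline runner. The shell commands below are placeholders, not a real build system; in practice you would swap in your own build, test, and deploy commands:

```python
import subprocess

# Placeholder stages: each pair is (stage name, shell command).
PIPELINE = [
    ("build", "echo compiling the application"),
    ("test", "echo running the test suite"),
    ("deploy", "echo pushing to production"),
]

def run_pipeline(stages=PIPELINE):
    """Run each stage in order, stopping at the first failure so a broken
    build or a failing test can never reach the deploy stage."""
    for name, command in stages:
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            return f"{name} failed"
    return "success"
```

The key design point is the early return: deployment only ever happens after every earlier stage has succeeded, which is exactly the guarantee a CI/CD server gives you.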
Continuous Integration — This is a practice in which code changes are regularly merged into a central repository, after which automated builds and tests are run. The goals are faster bug discovery and resolution, and better code management.
Continuous delivery — This means always having deployment-ready code that has passed the necessary tests. It builds upon continuous integration by deploying every code change to a test environment that mirrors the live environment and running automated tests against it, ensuring that the code is always ready for deployment. A simple product in the hands of customers today is worth more than a perfect product in their hands six months from now. Basically, develop and deploy the base version of your application, then build on it continuously until it becomes a big application.
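The integration half of this loop can be modeled in a few lines. This is a toy sketch, not any real CI system’s API; it assumes each change arrives bundled with its own automated tests:

```python
def integrate(mainline, change):
    """Merge a change into the shared mainline only if every one of its
    tests passes against the merged result; otherwise reject it and leave
    the mainline untouched."""
    candidate = mainline + [change["code"]]
    if all(test(candidate) for test in change["tests"]):
        return candidate, "merged"
    return mainline, "rejected"
```

Because a bad change is rejected the moment it is submitted, bugs are discovered while the change is still small and fresh in its author’s mind, which is the “faster bug discovery and resolution” the CI model promises.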
FUN FACT — Netflix deploys over 4,000 changes to live every day, with 50,000 pipelines running millions of tasks. Deploying smaller code sets more often means fewer surprises, better decision making, faster deployment, and faster error resolution.
MICROSERVICES
This is closely linked to CI/CD. It entails breaking an application into parts. For example, a social media application may have a user management service that communicates with a news-feed service, which in turn communicates with a machine learning service that recommends news to users. These services can be built in different languages and run as separate, individual processes, coming together to form the full application. Each service runs in its own process and communicates with the others over an HTTP interface, usually a web API. This approach helps build highly scalable software applications, and since the services are somewhat isolated, it provides a level of abstraction and security.
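Here is a minimal sketch of two services talking over HTTP using only the standard library. The “user service” and “feed service” names, routes, and data are all hypothetical, made up for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny "user management" microservice exposing one HTTP endpoint.
class UserHandler(BaseHTTPRequestHandler):
    USERS = {"1": {"id": "1", "name": "Ada"}}  # stand-in for a real database

    def do_GET(self):
        user = self.USERS.get(self.path.rstrip("/").split("/")[-1])
        if user is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(user).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request console logging
        pass

def start_user_service(port):
    """Run the user service in a background thread (its own 'process' here)."""
    server = HTTPServer(("127.0.0.1", port), UserHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# A second service (the "news feed") calls the user service over HTTP,
# never touching its internals directly.
def build_feed(user_id, user_service_port):
    url = f"http://127.0.0.1:{user_service_port}/users/{user_id}"
    with urlopen(url) as resp:
        user = json.loads(resp.read())
    return {"for": user["name"], "items": ["post-1", "post-2"]}
```

Notice that the feed service only depends on the user service’s HTTP contract, so either side could be rewritten in another language without the other noticing.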
MEASUREMENT AND MONITORING
For something to be effective, it has to be measured. Measuring and monitoring mean using data to improve processes and make more informed decisions.
- How many users complained about the product this week? What did they complain about?
- How long did it take from development to deployment?
- How long does it take to recover from a system failure?
Monitoring how software affects users, by capturing and analyzing user data as well as system logs, can help you make informed decisions about what users want, their pain points, issues in the application, and where it needs improvement. These decisions ultimately help build better updates and better software. Basically, your system should be able to explain itself.
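The last question above, time to recover from failure, can be computed directly from logs. The log format here (event name plus ISO timestamp) is hypothetical; real events would come from your monitoring stack:

```python
from datetime import datetime

def mean_time_to_recover(events):
    """Average seconds between each 'failure' event and the next
    'recovered' event in a chronologically ordered log."""
    durations = []
    failed_at = None
    for name, stamp in events:
        when = datetime.fromisoformat(stamp)
        if name == "failure":
            failed_at = when
        elif name == "recovered" and failed_at is not None:
            durations.append((when - failed_at).total_seconds())
            failed_at = None  # wait for the next failure
    return sum(durations) / len(durations) if durations else 0.0
```

Tracked over time, a metric like this tells you whether your recovery process is actually improving rather than leaving it to gut feeling.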
Implementing DevOps is fundamentally a culture and mindset shift, one built on responsibility, automation, and data-driven decisions that provide better value for the customer. But given the right tools and willingness from all parties involved, it is possible.
Hope you enjoyed this and that it helped lay the foundation for your journey into DevOps. In the next article we will explore cloud computing, one of the tools that make development and operations easier and more efficient. Till next time.