Top comments (72)
Honestly - it's just FTP & manual database pushes 🤷‍♂️
It's not sophisticated or fancy, but it works.
Thank you for your answer. It's important to keep in mind that even though we read all day long about fancy new techniques and tools, most of us are working on legacy codebases and deploying manually.
That said, Continuous Deployment is not just a fad. I recently changed jobs and moved from GitLab CI/CD (which is really nice) to a mix of "git pull" on the server, SFTP, rsync, and running the migrations manually... and it's a huge pain and a huge waste of time (not to mention that if something goes wrong we don't have an easy way to roll back to the previous version).
I haven't set up CI/CD pipelines yet because we use on-premise Bitbucket and it doesn't seem to offer CI/CD (which means we'll need to install Jenkins or something and I'll have to learn it), but it's pretty high on my to-do list.
I used to be on Bitbucket too, but I switched to GitLab and I find no reason to use anything different; I recommend you give it a try. I don't use the self-hosted version, but I guess you'll have the same options.
It does, it's called Pipelines I think. It's pretty decent.
As far as I can tell, Pipelines is only available on Bitbucket Cloud and not the self-hosted version (Bitbucket Server)? I'd love to be wrong though.
Ah OK, I don't know any more about that.
No shame in not using "fancy" CI tools. Whatever does the job.
Obviously you don't have to be ashamed of not using "fancy" CI tools, but when you do use them, you'll see why people are using them.
I've learned over the last 10 years that technologies that meet a need stay, while technologies that don't either disappear or linger on in legacy projects.
Git isn't something new (as you should know). CI scripts aren't new either; they only simplified the two-step task - where you were using git, svn, mercurial or whatever with Rundeck or a similar automation tool that needed to be fired manually - into a single-step one where devs only need to push to master (permissions allowing) and it all rolls smoothly into production, with an easy rollback if needed.
If you are not using a version control service, then yes, you should be ashamed.
I agree with Ben: "Whatever does the job". I worked at a company that had this approach too, with huge legacy products. I wrote a script to automate deployments like that with ssh; maybe it could be useful for you: github.com/felippe-regazio/sh-simp...
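For anyone in a similar spot, a minimal sketch of that kind of ssh/rsync deploy script might look like the following (this is not the linked script; the host, paths and the migrate command are placeholders):

```bash
#!/usr/bin/env bash
# Minimal ssh + rsync deploy sketch (placeholders throughout, adapt to your project).
set -euo pipefail

HOST="deploy@example.com"      # hypothetical server
APP_DIR="/var/www/myapp"       # hypothetical app path

# Sync the working copy, excluding VCS metadata and local config.
rsync -az --delete --exclude '.git' --exclude '.env' ./ "$HOST:$APP_DIR/"

# Run whatever manual steps remain (migrations etc.) in one ssh session.
# migrate.sh is a stand-in for your own migration command.
ssh "$HOST" "cd $APP_DIR && ./migrate.sh"
```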
AWS CodePipeline + AWS CodeDeploy + AWS CodeBuild
Same here, only our stack is HTML/JS/CSS + Python/Django + MongoDB/MariaDB. Every piece of code merged into the develop branch on the GitHub repo is immediately deployed to our dev/staging environment, also on AWS; same process for the master -> production counterparts.
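For what it's worth, a hand-run equivalent of the CodeDeploy step in that kind of pipeline is roughly the following (application name, bucket and deployment group are made up; normally CodePipeline triggers this for you):

```bash
# Hypothetical manual equivalent of the CodeDeploy stage of the pipeline.
BRANCH="${GIT_BRANCH:-develop}"   # develop -> staging, master -> production

# Bundle the working directory and upload it as a new application revision.
aws deploy push \
  --application-name myapp \
  --s3-location "s3://my-deploy-bucket/myapp-${BRANCH}.zip" \
  --source .

# Trigger a deployment of that revision to the environment matching the branch.
aws deploy create-deployment \
  --application-name myapp \
  --deployment-group-name "${BRANCH}" \
  --s3-location "bucket=my-deploy-bucket,key=myapp-${BRANCH}.zip,bundleType=zip"
```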
What stack? I have run into issues using NextJS with this deployment approach. TIA
Ruby on Rails, though the process is identical because NextJS is just a nodejs app.
I made a course on Udemy last year for creating a pipeline with Rails, but you could just skip the Rails part. I've been meaning to release that video course for free.
I would love to get to this point with my job.
The coolest and most frustrating thing about DevOps is there's a hundred different ways to do something. I say this in hope I won't be judged too harshly for how we do deployments.
I should first mention that we're not a company in the web app space. The company I love working for primarily creates cross-platform C++ applications that run on Linux/Windows appliances. Also, as a DevOps Engineer, my customers aren't always actual customers. More often than not, they're developers. When we deploy, we remotely update the Linux or Windows platform, uninstall any existing software, reboot, then install the most up-to-date software, license it, and verify the installation was successful.
We accomplish this primarily through Ansible playbooks that deal with the actual deployment, and use Jenkins jobs as the self-service mechanism for our developer customers. When devs want to upgrade their systems to test or do whatever, they can go to Jenkins, enter their IP and select the version to install and click 'Build'. The rest of the process is seamless to the customer, with the exception of the 'DevOps is deploying' screen we run during the deployment to let the remote user know the system is doing something.
I know we could look into Ansible Tower or FOSS alternatives, but people got used to Jenkins so I try to let that be the common interface for self-service tasks performed by our developer customers that need an automated capability.
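For illustration, the shell step behind such a parameterized Jenkins job might boil down to a single ansible-playbook call like this (the playbook name and variable names are assumptions, with TARGET_IP and VERSION being the job's build parameters):

```bash
# Hypothetical Jenkins build step: deploy a chosen version to a single appliance.
# The trailing comma turns the IP into an ad-hoc one-host inventory.
ansible-playbook deploy_appliance.yml \
  -i "${TARGET_IP}," \
  -e "app_version=${VERSION}" \
  -e "show_deploy_screen=true"
```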
AWX should meet your needs; it's basically Tower for free and integrates with your existing Ansible roles:
github.com/ansible/awx
We run a lot of workloads on Kubernetes nowadays. When you put the internet hype aside, it's a very solid platform to automate and manage lots of applications at once. It allows us to cut down infrastructure costs for many clients we provide hosting for.
Our standard deployment procedure is a git push on a particular branch (usually master), which triggers a pretty standard CI/CD pipeline: run tests, run linters, build & push the Docker image, apply the Kubernetes manifests. If anything goes wrong, Kubernetes allows us to roll back the deployment.
We handle different environments (dev, QA, prod) either with different branches or with manual env promotion, depending on the pipeline provider.
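As a rough sketch, the deploy stage of such a pipeline could run something like the commands below (image name, manifest path and deployment/container names are invented; CI_COMMIT_SHA is assumed to come from the CI runner):

```bash
#!/usr/bin/env bash
# Hypothetical deploy stage: build & push the image, apply manifests, roll back on failure.
set -euo pipefail

IMAGE="registry.example.com/myapp:${CI_COMMIT_SHA}"

docker build -t "$IMAGE" .
docker push "$IMAGE"

# Apply the manifests, then point the Deployment's container at the new tag.
kubectl apply -f k8s/
kubectl set image deployment/myapp app="$IMAGE"

# Wait for the rollout to become healthy; otherwise undo it.
if ! kubectl rollout status deployment/myapp --timeout=120s; then
  kubectl rollout undo deployment/myapp
  exit 1
fi
```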
Do you use a separate cluster for each environment, or just one cluster with multiple namespaces? We're moving to Kubernetes and currently just have one cluster (for staging while in development), but we're not sure whether to add another cluster for prod. It's more expensive, but gives us better separation.
It depends on which environment you are trying to deploy to. At my company, we have multiple environments of the same application: one each for Dev, QA, and Production.
For the sake of brevity, let's take a deployment from QA to Production. Note:
Local Machine -> Dev (do it as many times as your heart wishes)
Dev -> QA (OK, with some restrictions)
QA -> Production (OK, with a lot more restrictions)
Dev -> Production (A BIG NO-NO; could get me fired!)
Once the code has been peer reviewed and QA tested, we create a deployment folder that contains all project files and dependencies that are needed to perform the deployment.
We create a deployment ticket in TFS with instructions for the DevOps team on how to deploy it. Install this and delete that.
I sit and cross my fingers. If all goes well, they reply back with some feedback.
If the deployment fails, I usually have to work with DevOps on figuring out why and attempt to redeploy.
This process is very cumbersome at times and deployments can often span days. However, I have heard talk of going to fully automated deployments, but they are still trying to set up the nuts and bolts for the whole operation.
"instructions for the DevOps team on how to deploy it. Install this and delete that."
So, you have an operations team which is named DevOps?
I bet everyone at the company is annoyed at how "devops" has made things more complicated for little benefit.
It seems one of the biggest challenges with these new development processes is that they require true collaboration, something that is not heavily prioritized and is often actively avoided. It is so much easier to create definitions for an interface handoff. We do it in good software architecture all the time.
A simple process:
I use release-it
github.com/release-it/release-it
Since I use gitmoji and karma syntax, it generates a GitHub release changelog that is very easy to read for us and for clients.
Afterwards, in the after:git:release hook of release-it, I have a set of commands that do the following:
ssh to the dev server, zip the latest release & push it to S3
ssh to live serverX, download the latest release from S3, unzip & run the database migrations
This is quite practical, as I just have to run release-it in the folder of the project and it generates and does everything. It also means the dev and live servers are a perfect file copy, even installed packages.
We still have a staging server as well for all ongoing testing.
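For context, those hook commands could look roughly like this (the hosts, bucket and migrate.sh script are invented for the example; the real values would live in the project's release-it config):

```bash
# Hypothetical after:git:release commands, run by release-it in order.
VERSION=$(git describe --tags --abbrev=0)

# 1. On the dev server: zip the latest release and push it to S3.
ssh deploy@dev.example.com "cd /var/www/app \
  && zip -rq /tmp/app-${VERSION}.zip . \
  && aws s3 cp /tmp/app-${VERSION}.zip s3://my-releases/"

# 2. On the live server: download the release, unzip it and run the database migrations.
ssh deploy@live1.example.com "cd /var/www/app \
  && aws s3 cp s3://my-releases/app-${VERSION}.zip /tmp/ \
  && unzip -oq /tmp/app-${VERSION}.zip \
  && ./migrate.sh"
```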
Wow! I like your GitHub dark mode, can you share the extension, please?
You can use the Chrome Dark Reader extension.
This is just the GitHub Desktop app:
desktop.github.com/
At the day job we have several projects that are deployed independently using BuildKite.
For a freelance client I use CodeShip to handle the deployment of a Firebase-hosted site, Firebase Functions and Firebase Database migrations, triggered by a push to the repo. Each branch in the repo deploys a separate site/functions/db.
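Roughly, the per-branch deploy step could be a small script like this (project aliases are invented; CI_BRANCH and FIREBASE_TOKEN are assumed to be provided by CodeShip):

```bash
# Hypothetical CI deploy step: map each branch to its own Firebase project.
case "$CI_BRANCH" in
  master)  PROJECT="myclient-prod" ;;
  develop) PROJECT="myclient-staging" ;;
  *)       PROJECT="myclient-preview" ;;
esac

firebase deploy --only hosting,functions --project "$PROJECT" --token "$FIREBASE_TOKEN"
```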
For most small personal projects I use react-static and Netlify; so it's simply a push to the repo.
There's more than one application that we serve at my company.
The first application uses a dated deployment, which goes like this:
Bring up the maintenance page.
Bring down all running web servers.
Migrate the database schema.
Bring up the web servers with the new release.
Remove the maintenance page.
There are a couple of issues with this kind of deployment. For some customers we incur business loss, because they've got people around the globe working at different hours.
The second application uses a rolling deployment, which goes like this:
Migrate the database schema.
Bring up the new web servers.
Add the new web servers to the load balancer.
Remove the old web servers from the load balancer.
There are some special considerations with regard to how migrations need to be written, since the old application will still be running. For example, removing a column needs to be split into two releases instead of one.
To answer your second question, our SDLC (software development life cycle) looks for the most part like this:
Open a PR.
CI runs tests.
Code review.
Deploy to the QA environment.
Changes are tested internally.
Deploy to the UAT (user acceptance testing) environment.
Customer validates that changes are OK for production.
AWS + BuildKite pipeline (for uploading, building and deployment)
How has your experience been with BuildKite? Do you like it?
I like it a lot; it's easy to use and set up.
A git pull. Amazingly, that's better than when I started and took over. It was a case of FTP to the first server, and just hoping it didn't break stuff, but also that the files would get rsync'd to the second server. If they didn't, it needed firewall changes to allow SSH access to the server to then restart the rsync process.
Our new platform is going to do the deployments automatically using GitLab's CI/CD stuff, mainly because I don't want to have to keep doing it, but also because there are going to be more server nodes.