We run a lot of workloads on Kubernetes nowadays. When you put the internet hype aside, it's a very solid platform to automate and manage lots of applications at once. It allows us to cut down infrastructure costs for many clients we provide hosting for.
Our standard deployment procedure is git push on a particular branch (usually master) which triggers a pretty standard CI/CD pipeline: run tests, run linters, build & push Docker image, apply Kubernetes manifests. If anything goes wrong, Kubernetes allows us to roll back the deployment.
We handle different environments (dev, QA, prod) either with different branches or with manual env promotion, depending on the pipeline provider.
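For illustration, that kind of pipeline step boils down to something like the bash sketch below; the registry, image and deployment names (and the commit variable) are made up, not the actual setup:

set -euo pipefail
IMAGE="registry.example.com/myapp:${COMMIT_SHA}"    # hypothetical image, tagged with the commit

docker build -t "$IMAGE" .                          # build the application image
docker push "$IMAGE"                                # push it to the registry
kubectl set image deployment/myapp app="$IMAGE"     # point the Deployment at the new image ("app" is a hypothetical container name)
kubectl rollout status deployment/myapp --timeout=120s \
  || kubectl rollout undo deployment/myapp          # if the rollout fails, roll back to the previous version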
Do you use a separate cluster for each environment, or just one cluster with multiple namespaces? We're moving to Kubernetes and currently just have one cluster (for staging while in development), but we're not sure whether to add another cluster for prod. It's more expensive, but it gives us better separation.
It depends on which environment you are trying to deploy to. At my company, we have multiple environments of the same application. One for Dev, QA, and Production.
For the sake of brevity, let's take a deployment from QA to Production. Note:
Local Machine -> Dev (do it as many times as your heart wishes 😄)
Dev -> QA (OK with some restrictions),
QA -> Production (OK with a lot more restrictions),
Dev -> Production (A BIG NO NO, could get me fired!).
Once the code has been peer reviewed and QA tested, we create a deployment folder that contains all project files and dependencies needed to perform the deployment.
We create a deployment ticket in TFS with instructions for the DevOps team on how to deploy it. Install this and delete that.
I sit and cross my fingers. If all goes well, they reply back with some feedback.
If the deployment fails, I usually have to work with DevOps on figuring out why and attempt to redeploy.
This process is very cumbersome at times and deployments can often span days. However, I have heard talk of going to fully automated deployments 😄, but they are still trying to set up the nuts and bolts for the whole operation.
"instructions for the DevOps team on how to deploy it. Install this and delete that."
So, you have an operations team which is named devops?
I bet everyone at the company is annoyed at how "devops" has made things more complicated for little benefit.
It seems one of the biggest challenges with these new development processes is that they require true collaboration, something not heavily prioritized and often actively avoided. It is so much easier to create definitions for an interface handoff. We do it in good software architecture all the time.
There's more than one application which we serve at my company.
The first application uses a dated deployment, which goes like this:
Bring up the maintenance page.
Bring down all running web servers.
Migrate the database schema.
Bring up the web servers with the new release.
Remove the maintenance page.
There are a couple of issues with this kind of deployment. For some customers we incur business loss because they've got people around the globe working at different hours.
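Boiled down to a script, that maintenance-window flow looks roughly like this; the service names and paths are hypothetical, just to make the steps concrete:

set -euo pipefail
cp maintenance.html /var/www/html/index.html   # bring up the maintenance page
systemctl stop myapp-web                       # bring down the running web servers
./run_migrations.sh                            # migrate the database schema (hypothetical wrapper script)
systemctl start myapp-web                      # bring the web servers back up on the new release
rm /var/www/html/index.html                    # remove the maintenance page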
The second application uses a rolling deployment, which goes like this:
Migrate the database schema.
Bring up the new web servers.
Add the new web servers to the load balancer.
Remove the old web servers from the load balancer.
There are some special considerations with regard to how migrations need to be written, since the old application will still be running. For example, removing a column needs to be split into two releases instead of one.
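As a concrete (hypothetical) example of that split, dropping a column safely ends up as two releases rather than one:

# Release 1: ship application code that no longer reads or writes the column,
# so old and new versions can run side by side during the rolling deploy.

# Release 2: once release 1 is fully rolled out, actually drop the column.
psql "$DATABASE_URL" -c 'ALTER TABLE orders DROP COLUMN legacy_status;'   # table/column names are made up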
To answer your second question, our SDLC (software development life-cycle) looks for the most part like this:
Open a PR.
CI runs tests.
Code review.
Deploy to QA environment.
Changes are tested internally.
Deploy to UAT (user acceptance testing) environment.
Customer validates that changes are OK for production.
For our web app, I would merge changes into master, pull the changes into my local and use rsync to sync the files in my local with the files on our staging server. After testing, I would sync the files in my local with our production server.
It works well enough, but it's annoying when I have to deploy a quick fix and there are changes in staging that are not yet tested or ready for production. When that happens, I'd revert that MR and pull again, but only if it's an MR with a lot of changes. Otherwise, I do it manually on production, but make sure to create an MR for it that is merged and pushed to staging, so that the next time I deploy to production the fix doesn't get lost.
I really want to change this deployment process because I don't have a lot of trust in it, so hopefully I'll get to it when I have some time.
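For reference, that manual flow is roughly the following; the hostnames and paths are made up:

git checkout master && git pull                                                     # get the merged changes locally
rsync -avz --delete --exclude='.git' ./ deploy@staging.example.com:/var/www/app/    # sync local files to staging
# ...test on staging...
rsync -avz --delete --exclude='.git' ./ deploy@prod.example.com:/var/www/app/       # then sync the same files to production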
git pull
Amazingly, that's better than when I started and took over. It was a case of FTP to the first server, and just hoping it didn't break stuff, but also that the files would get rsync'd to the second server. If they didn't, it needed firewall changes to allow SSH access to the server to then restart the rsync process.
Our new platform is going to do the deployments automatically using GitLab's CI/CD, mainly because I don't want to have to keep doing it, but also because there are going to be more server nodes.
developer commits change to feature branch locally
developer pushes code to GitLab
triggers a pipeline of tasks
development team reviews
branch merged to target / environment branch
branch is deployed to environment
personnel responsible for the environment confirms changes
branch is merged to master
On deployment day, master is deployed to production
We are trying to move to a more development -> new ephemeral environment per branch -> integration -> production deployment process. That is our current goal, to give the development team more flexibility in their workflow.
At Work we use Bitbucket and Jenkins to push into Google's cloud services.
For private projects I try out all sorts of things. One site is pushed manually by FTP, one has GitLab CI, one is on GitHub and Travis... I think I like GitLab most, because it's one integrated and very versatile solution.
Jenkins with GitFlow for larger, high-risk products that require more gates to be crossed, and plain ol' Jenkins plus GitHub hooks to automatically build and deploy for smaller products and products with less risk.
Whatever works for you, the tool chain should match the need!
Our frontend is built on React and Hugo, so we use Netlify to deploy on every push to master.
We have written scripts to upload and deploy our backend code in Docker containers. We have an in-house tool (it's open source, btw) which takes those containers and runs them on Kubernetes or Docker.
For us, it's all about PowerShell. We write our scripts manually, using mostly built-in PS cmdlets. Our projects always have one for build (which gathers all dependencies, restores etc.) and one for deploy.
I've made Jenkins jobs for my client: separate jobs for production and development build/deployment, plus several automated server maintenance tasks too.
Though I'm not sure I'd recommend this for someone who's not fluent with Jenkins - it can be an overhead to learn, and the market has already shifted from it to simpler automation solutions like GitHub Actions and so on. Still, it's a great and super powerful tool.
A PR merged to master kicks off a webhook to CodeBuild, which builds and places artifacts in an S3 bucket. CodePipeline is triggered by the update to that artifact in S3 and waits for a manual approval. Then our infrastructure is built or updated by a CloudFormation template if a creation or update is required based on the templates checked into GitHub. The code contained in the infrastructure is updated alongside all of this. I really wanted to go for the whole DevOps thing.
We had an internal tool that generated a Git repo with a hello world app built with the selected language/framework (Node, Spring Boot, Angular....), and it also generated Jenkins pipelines for you, so as soon as you had something you wanted to test in DEV, you only had to push the code to the repo and the pipelines deployed it for you.
Super nice setup, both for people that don't know about Jenkins and CI/CD and for people that are more advanced, as we were able to modify the pipelines as we wished.
The project I work on at my company runs on Kubernetes. We haven't set up a CI/CD pipeline yet. Most of the time we execute a script that builds a new image of the service and applies the changes to the deployment automatically.
The next project we'll be working on is going to be hosted on a standalone VM. All the different environments are going to sit there. We're planning to use Portainer to manage the Docker containers. To me it seems like a lot of manual work.
Anyone having the same development experience, please share some stories. I'd love to hear them!
Git with CI/CD.
It really takes like 10-15 minutes to create a GitLab project, set up the CI script and test it.
Even an old project can be put into a Git project and implemented with an automated script that runs when a merge request to master is approved. It's not difficult (once you've done it before to train yourself).
If you're on a legacy project that needs several actions when deployed, you could automate those in the CI script too, but if the actions are conditional (e.g. if I push a controller override then delete the server cache, otherwise don't do that) you'll need to perform these actions manually, I think.
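That said, a conditional step like that can sometimes be scripted too; a hedged sketch, where the path and cache-clear script are hypothetical:

# run on the server after pulling the new release
if git diff --name-only HEAD@{1} HEAD | grep -q '^app/controllers/'; then
  ./clear_server_cache.sh   # hypothetical cache-clear step, only when a controller changed
fi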
For the Frontend codebase, when I merge to master code is deployed to production and staging.
For the Backend codebase, I manually create a GitHub release to trigger a deployment to production. This is because we need to do security checks in staging before the production deployment.
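The manual trigger is just publishing a release; with the GitHub CLI that's a one-liner along these lines (the tag and notes are hypothetical):

gh release create v1.42.0 --title "v1.42.0" --notes "backend release"   # publishing the release kicks off the production deploy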
For the "simple" cases with docker:
When a commit is pushed to staging branch, it builds and pushs to docker hub the docker image. Then it triggers a bash script on the staging server which pull and replace the image.
As we tag the image with commit ID, rollback is either 1 line to run on the server, or git push a previous commit.
For production, same workflow but it asks for a manual click to deploy.
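A hedged sketch of what that server-side script and the one-line rollback can look like (image and container names are made up):

# deploy.sh <commit-id> -- run on the staging or production host
SHA="$1"
docker pull "registry.example.com/myapp:${SHA}"                              # fetch the image built for that commit
docker rm -f myapp 2>/dev/null || true                                       # drop the currently running container
docker run -d --name myapp -p 80:8080 "registry.example.com/myapp:${SHA}"    # start the new one

# rollback is the same script with the previous commit ID:
# ./deploy.sh <previous-commit-id>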
At our company we have two separate environments (PRE/PROD), and we have a couple of different pipelines:
For PRE we do CI/CD with GH Actions: anything we push to master (if it passes the tests) gets built into a Docker image, then the action calls a webhook which tells Portainer that it should update the container with the newly built image.
For PROD it's quite similar; the only difference is we only deploy when a Release is published (we don't like CI/CD for production).
We previously used Jenkins, and before that, we uploaded with SSH (does the job :P).
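The Portainer hand-off at the end of a pipeline like that is essentially one webhook call; in a GitHub Actions step it reduces to something like this (the webhook URL, stored as a secret, is hypothetical):

curl -fsS -X POST "$PORTAINER_WEBHOOK_URL"   # tells Portainer to pull the newly built image and redeploy the container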
Github push ->
Jenkins CI Build docker images ->
Terraform (or CloudFormation, depends on stack) apply if changed infra ->
New stack spin up ->
Switch LB to new stack when new code is green ->
Cleanup old stack
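Condensed into a script, that flow is roughly the following; stack names, variables and the health-check URL are made up, and the real thing lives in Jenkins/Terraform:

set -euo pipefail
docker build -t "registry.example.com/myapp:${GIT_COMMIT}" .
docker push "registry.example.com/myapp:${GIT_COMMIT}"

terraform apply -auto-approve -var "app_version=${GIT_COMMIT}"              # spin up the new stack if the infra changed

until curl -fsS "https://new-stack.example.com/healthz"; do sleep 5; done    # wait until the new code is green
terraform apply -auto-approve -var "active_stack=new"                        # switch the load balancer to the new stack
terraform apply -auto-approve -var "old_stack_count=0"                       # clean up the old stack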
We use Jenkins. Every time someone creates a branch and pushes something, Jenkins runs all the tests and builds that branch independently, and if everything is okay, we create a gzip artifact and upload it to AWS S3.
Then, whenever someone wants to check the changes on that branch, we have a separate page in Jenkins where we can say which environment a specific branch should be deployed to. You select an environment from a dropdown and type the branch name. It then downloads the changes from S3, extracts them and copies the changes over with rsync, then restarts the process. Within 5-10 seconds you are ready! We only allow prod to deploy the master branch. With this approach we can test any branch in any test environment, so you don't need to have staging, dev, etc. branches other than master!
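The per-environment deploy step then comes down to something like this; the bucket, paths and service name are made up:

BRANCH="$1"; ENVIRONMENT="$2"
aws s3 cp "s3://build-artifacts/${BRANCH}.tar.gz" /tmp/release.tar.gz    # fetch the artifact Jenkins built
mkdir -p /tmp/release && tar -xzf /tmp/release.tar.gz -C /tmp/release    # extract it
rsync -a --delete /tmp/release/ "/srv/${ENVIRONMENT}/app/"               # copy the changes over
systemctl restart "myapp-${ENVIRONMENT}"                                 # restart the process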
At work we're doing either of the following:
SSH -> (of course; some services still need that hand touch)
It's up to the project owner to define the release strategy; usually they use environment-specific deployment branches and a version tag for pushing an official production release.
On-prem GitLab for the building of artifacts and Dockerizing of the application for the CI portion. No true CD atm. Currently it’s a manual canary strategy followed by manual feature testing against the canary and then a rolling deployment once verified.
Well, just compile your VB.NET (yes, you read that right, VB.NET) and use Beyond Compare to copy the DLLs and executables to the servers. If you have to do something with databases, you need to wait for a specific day and time to request a database change; if it's urgent (which is pretty much every day), the only person who can give you his blessing is the IT director. To my bad luck, I don't have that sense of urgency everyone talks about.
Cheers!!
We’re using Jenkins to CI/CD and to publish Docker containers in our Swarm repository. Projects in Node.js, my team started to use Github Actions to CI/CD and publish on npm.
Honestly - it's just FTP & manual database pushes 🤷♀️
It's not sophisticated or fancy, but it works.
Thank you for your answer. It's important to keep in mind that even though we read all day long about fancy new techniques and tools, most of us are working on legacy codebases and deploying manually.
That said, Continuous Deployment is not just a fad. I recently changed jobs and moved from GitLab CI/CD (which is really nice) to a mix of "git pull" on the server, SFTP, rsync, and running the migrations manually... and it's a huge pain and a huge waste of time (not to mention that if something goes wrong we don't have an easy way to roll back to the previous version).
I haven't set up CI/CD pipelines yet because we use on-premise Bitbucket and it doesn't seem to offer CI/CD (which means we'll need to install Jenkins or something and I'll have to learn that), but it's pretty high on my to-do list.
I used to be on Bitbucket too, but I switched to GitLab and I find no reason to use anything different; I recommend you give it a try. I don't use the self-hosted version, but I guess you will have the same options.
It does, it’s called pipelines I think. It’s pretty descent.
As far as I can tell, Pipelines is only available on Bitbucket Cloud, and not the self-hosted version (Bitbucket Server)? I'd love to be wrong though.
Ah ok, I don't know more about that.
No shame in not using “fancy” CI tools. Whatever does the job.
Obviously you don't have to be ashamed of not using "fancy" CI tools, but when you do use them, you'll see why people are using them.
I've learned over the last 10 years that technologies that meet a need stay, and technologies that don't, disappear or remain in legacy projects.
Git isn't something new (as you should know). CI scripts aren't new either; they only simplified the two-step task - where you were using git, svn, mercurial or whatever with Rundeck or a similar automation that needed to be fired manually - into a single-step one where devs only need to push to master (if they have permission) and it all rolls smoothly into production, with the ability to roll back easily if needed.
If you are not using a version control service, then yes, you need to be ashamed.
I agree with Ben, "Whatever does the job". I worked at a company that had this approach too, with huge legacy products. I wrote a script to automate deployments like that with ssh; maybe it could be useful for you: github.com/felippe-regazio/sh-simp...
AWS CodePipeline + AWS CodeDeploy + AWS CodeBuild
Same here, only our stack is HTML/JS/CSS + Python/Django + MongoDB/MariaDB. Every change merged into the develop branch on our GitHub repo is immediately deployed to our dev/staging environment, also on AWS; same process for the master -> production counterparts.
What stack? I have run into issues using NextJS with this deployment approach. TIA
Ruby on Rails, though the process is identical because NextJS is just a nodejs app.
I had a course I made on Udemy last year for creating a pipeline with Rails, but you could just skip the Rails part. I've been meaning to release that video course for free.
I would love to get to this point with my job.
The coolest and most frustrating thing about DevOps is there's a hundred different ways to do something. I say this in hope I won't be judged too harshly for how we do deployments.
I should first mention that we're not a company in the web app space. The company I love working for primarily creates cross-platform C++ applications that run on Linux/Windows appliances. Also, as a DevOps Engineer, my customers aren't always actual customers. More often than not, they're developers. When we deploy, we remotely update the Linux or Windows platform, then uninstall any existing software, reboot, install the most up-to-date software, license it, and verify the installation was successful.
We accomplish this primarily through Ansible playbooks that deal with the actual deployment, and use Jenkins jobs as the self-service mechanism for our developer customers. When devs want to upgrade their systems to test or do whatever, they can go to Jenkins, enter their IP and select the version to install and click 'Build'. The rest of the process is seamless to the customer, with the exception of the 'DevOps is deploying' screen we run during the deployment to let the remote user know the system is doing something.
I know we could look into Ansible Tower or FOSS alternatives, but people got used to Jenkins so I try to let that be the common interface for self-service tasks performed by our developer customers that need an automated capability.
AWX should meet your needs, it's basically Tower for free and integrates with your existing Ansible roles:
github.com/ansible/awx
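Under the hood, a self-service Jenkins job like the one described above typically ends up shelling out to a playbook run along these lines; the playbook name and variables are hypothetical, not the exact setup:

# IP and VERSION come from the Jenkins build parameters the developer filled in
ansible-playbook deploy_appliance.yml \
  -i "${IP}," \
  -e "app_version=${VERSION}" \
  -e "show_deploying_screen=true"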
A simple process: I use release-it (github.com/release-it/release-it).
Since I use gitmoji and karma syntax it generates a GitHub release changelog that is very easy to read for us and for clients.
Afterwards, in the after:git:release hook of release-it, I have a set of commands that does the following:
ssh to dev server & zip latest release & push to s3
ssh to live serverX & download latest release from s3 & unzip & do database migrations
This is quite practical as I just have to run release-it in the folder of the project and it generates and does everything. It also means the dev and live servers are a perfect file copy, even installed packages.
We still have a staging server as well for all ongoing testing.
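A hedged sketch of what that after:git:release hook might run; the hosts, bucket and paths are made up:

# zip the fresh release on the dev server and park it in S3
ssh deploy@dev.example.com 'cd /var/www/app && zip -r /tmp/release.zip . && aws s3 cp /tmp/release.zip s3://releases-bucket/latest.zip'

# pull it down on each live server, unpack and migrate
ssh deploy@live1.example.com 'aws s3 cp s3://releases-bucket/latest.zip /tmp/ && unzip -o /tmp/latest.zip -d /var/www/app && cd /var/www/app && ./run_migrations.sh'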
Wow! I like your GitHub dark mode, can you share the extension, please!
This is just the Github Desktop app:
desktop.github.com/
You can use the Chrome Dark Reader extension.
At the day job we have several projects that are deployed independently using BuildKite.
For a freelance client I use CodeShip to handle the deployment of a Firebase hosted site, Firebase Functions and Firebase Database migrations - triggered by a push to the repo. Each branch in the repo deploys separate site/functions/db.
For most small personal projects I use react-static and Netlify; so it's simply a push to the repo.
AWS + BuildKite Pipeline ( for Uploading, building and deployment)
How has your experience been with BuildKite? Do you like it?
I like it a lot, easy to use and set up.
At work? ssh, cp, vim, and hope for the best. We have automated backups, but no source versioning or CD of any kind.
The portfolio I'm working on uses GitLab CI to build Docker (Compose) containers, test them, and deploy them.
AWS Amplify for the frontend app and our serverless backend.
Previously it was a bunch of scp scripting, but NOW it's a bunch of Jenkins scripting
As a consultant, we’ve implemented and worked with a lot of different CI/CD stacks. But most of them are a variant of..
Bunch of auxiliary tasks and checks on top of that basic setup.
Sad to say!
People here just edit the files directly from WinSCP 🤷♀️, but I'm pushing everyone to implement CI/CD via GitHub Actions and use SSH/SFTP-based deployment to Production / Canary / Staging.
We zero-downtime deploy our applications (mostly PHP - Laravel or WordPress - but not only) with deployer.org/.
Very easy, fast and reliable.
This is awesome! I had never heard of it. Will give it a shot with a Craft CMS (Yii2) based platform.
Before everything we always do a PR review.
Thereafter:
For most Laravel projects:
For other projects (Drupal, WP) and/or tiny ones:
Push Docker images to Docker Hub on merge to master, then docker-compose up on the server.
For the current project I am building:
docker build
docker push
k ... replicas=0
k ... replica=1
For the one that's in production we have CI/CD with cronos and ansible scripts
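Spelled out (assuming k is an alias for kubectl and a hypothetical deployment name), the replicas=0 / replicas=1 steps above are just a scale-down/scale-up so the pod comes back with the freshly pushed image:

docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest
kubectl scale deployment/myapp --replicas=0   # stop the old pod
kubectl scale deployment/myapp --replicas=1   # start a new pod, which pulls the new image (assuming imagePullPolicy: Always)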
For my day job, we upload manually to AWS.
For my freelance gigs, I have continuous deployment set up with GitHub and Netlify.
Code gets attached to a tennis ball then thrown into a hockey rink and we hope the players hit it the right direction (sometimes it's felt like that!)
Gitlab CI/CD at my “Financial” Support Org.
Github Actions at BarelyHuman
We’re using Jenkins to CI/CD and to publish Docker containers in our Swarm repository. Projects in Node.js, my team started to use Github Actions to CI/CD and publish on npm.
We just push from devenv to master repo, then pull to staging env, and then pull to prod.
For the current project, working with cloud functions primarily. Bitbucket pipelines + GCP
SSH in the server and run “git pull origin master”
Just the Netlify CLI for static sites and the Heroku CLI for dynamic sites.
I've done both manual object deployments and a CD process: uploading WAR files to JBoss servers. We transitioned to doing the same via a Jenkins server.
We're big fans of DeployBot. We use Digital Ocean and Bitbucket so it makes it really simple to deploy from BB into DO.
We use GoCD to handle the deployment, from Bitbucket, our repository, to our Docker clusters on AWS.
Terraform + GitHub CI/CD > GCP