DEV Community

How does deployment work at your organization?

Ben Halpern
A Canadian software developer who thinks he’s funny. He/Him.
・1 min read

What is the process to get code into prod?

Discussion (73)

𝐍𝐚𝐭𝐚𝐥𝐢𝐞 𝐝𝐞 𝐖𝐞𝐞𝐫𝐝 • Edited

Honestly - it's just FTP & manual database pushes 🤷‍♀️
It's not sophisticated or fancy, but it works.

Nicolas Bailly • Edited

Thank you for your answer. It's important to keep in mind that even though we read all day about fancy new techniques and tools, most of us are working on legacy codebases and deploying manually.

That said, Continuous Deployment is not just a fad. I recently changed jobs and moved from GitLab CI/CD (which is really nice) to a mix of "git pull" on the server, SFTP, rsync, and running the migrations manually... and it's a huge pain and a huge waste of time (not to mention that if something goes wrong, we don't have an easy way to roll back to the previous version).

I haven't set up CI/CD pipelines yet because we use on-premises Bitbucket and it doesn't seem to offer CI/CD (which means we'll need to install Jenkins or something, and I'll have to learn that), but it's pretty high on my to-do list.
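
In the meantime, even the manual routine can be collected into one script so there is at least a backup to roll back to. A minimal sketch, not Nicolas's actual setup: the host, paths, and the artisan-style migration command are placeholders, and the commands are printed rather than executed so you can review them before wiring in real `ssh`/`rsync`:

```shell
#!/bin/sh
# One-shot deploy sketch: back up, sync, migrate.
# Host, paths, and the migration command are placeholders.
deploy() {
  host="$1"; tag="$2"
  # Printing instead of executing keeps this a reviewable dry run.
  echo "ssh $host 'cp -a /var/www/app /var/www/app.bak-$tag'"
  echo "rsync -az --delete --exclude .git ./ $host:/var/www/app/"
  echo "ssh $host 'cd /var/www/app && php artisan migrate --force'"
}

deploy deploy@example.com "$(date +%Y%m%d)"
```

The backup copy gives a crude rollback path (swap the directories back) that the pure git-pull-and-rsync process lacks.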

JoelBonetR

I used to be on Bitbucket too, but I switched to GitLab and I find no reason to use anything different; I recommend you give it a try. I don't use the self-hosted version, but I guess you'll have the same options.

Kostas Bariotis

It does, it's called Pipelines, I think. It's pretty decent.

Nicolas Bailly

As far as I can tell, Pipelines is only available on Bitbucket Cloud, and not the self-hosted version (Bitbucket Server)? I'd love to be wrong though.

Kostas Bariotis

Ah ok, I don't know more about that.

Ben Halpern Author

No shame in not using “fancy” CI tools. Whatever does the job.

JoelBonetR

Obviously you don't have to be ashamed of not using "fancy" CI tools, but once you do use them, you'll see why people do.

I've learned over the last 10 years that technologies that meet a need stay, and technologies that don't, disappear or remain in legacy projects.

Git isn't something new (as you should know). CI scripts aren't new either; they only simplified a two-step task (where you used git, svn, mercurial or whatever, plus Rundeck or a similar automation that had to be fired manually) into a single-step one, where devs only need to push to master (permissions allowing) and it all rolls smoothly into production, with an easy rollback if needed.

If you're not using version control at all, then yes, you need to be ashamed.

Felippe Regazio

I agree with Ben: "whatever does the job". I worked at a company that had this approach too, with huge legacy products. I wrote a script to automate deployments like that over SSH; maybe it could be useful for you: github.com/felippe-regazio/sh-simp...

Andrew Brown 🇨🇦

AWS CodePipeline + AWS CodeDeploy + AWS CodeBuild

Rinzler

Same here, only our stack is HTML/JS/CSS + Python/Django + MongoDB/MariaDB. Everything merged into the develop branch of our GitHub repo is immediately deployed to our dev/staging environment (also on AWS); the same process applies to the master -> production counterparts.

Franco Valdes

What stack? I have run into issues using Next.js with this deployment approach. TIA

Andrew Brown 🇨🇦

Ruby on Rails, though the process is identical because Next.js is just a Node.js app.
I made a course on Udemy last year about creating a pipeline with Rails, but you could just skip the Rails part. I've been meaning to release that video course for free.

Paul

I would love to get to this point with my job.

Jim • Edited

The coolest and most frustrating thing about DevOps is that there are a hundred different ways to do something. I say this in the hope that I won't be judged too harshly for how we do deployments.

I should first mention that we're not a company in the web app space. The company I love working for primarily creates cross-platform C++ applications that run on Linux/Windows appliances. Also, as a DevOps Engineer, my customers aren't always actual customers. More often than not, they're developers. When we deploy, we remotely update the Linux or Windows platform, then uninstall any existing software, reboot, install the most up-to-date software, license it, and verify the installation was successful.

We accomplish this primarily through Ansible playbooks that handle the actual deployment, and use Jenkins jobs as the self-service mechanism for our developer customers. When devs want to upgrade their systems to test or do whatever, they can go to Jenkins, enter their IP, select the version to install, and click 'Build'. The rest of the process is seamless to the customer, with the exception of the 'DevOps is deploying' screen we run during the deployment to let the remote user know the system is doing something.

I know we could look into Ansible Tower or FOSS alternatives, but people got used to Jenkins, so I try to let that be the common interface for self-service tasks performed by our developer customers that need an automated capability.
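
A sketch of what a self-service Jenkins job like this might invoke under the hood; the playbook name, the extra vars, and the inline-inventory trick are assumptions for illustration, not Jim's actual setup:

```shell
#!/bin/sh
# Hypothetical wrapper a Jenkins job could call with its build
# parameters (target IP, app version). The trailing comma makes
# ansible-playbook treat the bare IP as an inline inventory.
deploy_appliance() {
  target_ip="$1"; version="$2"
  echo ansible-playbook deploy.yml \
    -i "${target_ip}," \
    -e "app_version=${version} show_maintenance_screen=true"
}

deploy_appliance 10.0.0.5 2.3.1
```

Printing the command (instead of running it) keeps the sketch reviewable; a real job would execute it and surface the playbook output in the build log.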

Shenril • Edited

AWX should meet your needs; it's basically Tower for free and integrates with your existing Ansible roles:
github.com/ansible/awx

Matteo Joliveau

We run a lot of workloads on Kubernetes nowadays. When you put the internet hype aside, it's a very solid platform to automate and manage lots of applications at once. It allows us to cut down infrastructure costs for many clients we provide hosting for.

Our standard deployment procedure is git push on a particular branch (usually master) which triggers a pretty standard CI/CD pipeline: run tests, run linters, build & push Docker image, apply Kubernetes manifests. If anything goes wrong, Kubernetes allows us to roll back the deployment.

We handle different environments (dev, QA, prod) either with different branches or with manual env promotion, depending on the pipeline provider.
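
The stages of that push-to-master pipeline, sketched as a script. The image name, registry, and manifest path are placeholders, and each step is printed rather than run, so this is an outline of the flow rather than Matteo's actual pipeline config:

```shell
#!/bin/sh
# Stage order for the push-to-master pipeline described above.
# IMAGE, the registry, and the k8s/ path are placeholders.
pipeline() {
  sha="$1"
  image="registry.example.com/myapp:${sha}"
  echo "stage test:   npm test"
  echo "stage lint:   npm run lint"
  echo "stage build:  docker build -t ${image} ."
  echo "stage push:   docker push ${image}"
  echo "stage deploy: kubectl apply -f k8s/"
  # If the rollout goes bad, Kubernetes keeps the previous ReplicaSet:
  echo "rollback:     kubectl rollout undo deployment/myapp"
}

pipeline abc1234
```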

Ben Mechen

Do you use a separate cluster for each environment, or just one cluster with multiple namespaces? We're moving to kubernetes and currently just have 1 cluster (for staging while in development) but we're not sure whether to add another cluster for prod. It's more expensive, but gives us better separation.

Hieu Nguyen • Edited

It depends on which environment you are trying to deploy to. At my company, we have multiple environments of the same application. One for Dev, QA, and Production.

For the sake of brevity, let's take a deployment from QA to Production. Note:
Local Machine -> Dev (do it as many times as your heart wishes 😄)
Dev -> QA (OK, with some restrictions),
QA -> Production (OK, with a lot more restrictions),
Dev -> Production (A BIG NO-NO, could get me fired!).

  1. Once the code has been peer reviewed and QA tested, we create a deployment folder that contains all project files and dependencies needed to perform the deployment.
  2. We create a deployment ticket in TFS with instructions for the DevOps team on how to deploy it: install this, delete that.
  3. I sit and cross my fingers. If all goes well, they reply back with some feedback.
  4. If the deployment fails, I usually have to work with DevOps to figure out why, and attempt to redeploy.

This process is very cumbersome at times, and deployments can often span days. However, I have heard talk of going to fully automated deployments 😄, but they are still trying to set up the nuts and bolts of the whole operation.

Jesse Phillips • Edited

"instructions for the DevOp team on how to deploy it. Install this and delete that."

So, you have an operations team that is named DevOps?

I bet everyone at the company is annoyed at how "devops" has made things more complicated for little benefit.

It seems one of the biggest challenges with these new development processes is that they require true collaboration, something not heavily prioritized and often actively avoided. It is so much easier to create definitions for an interface handoff. We do it in good software architecture all the time.

Médéric Burlet

A simple process:

I use release-it
github.com/release-it/release-it

Since I use gitmoji and Karma syntax, it generates a GitHub release changelog that is very easy to read for us and for clients.

[screenshot of the generated changelog]

Afterwards, in the after:git:release hook of release-it, I have a set of commands that does the following:

  • ssh to the dev server & zip the latest release & push it to S3
  • ssh to live serverX & download the latest release from S3 & unzip & run database migrations

This is quite practical, as I just have to run release-it in the project folder and it generates and does everything. It also means the dev and live servers are perfect file copies of each other, down to installed packages.

We still have a staging server as well for all ongoing testing.
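
The two hook legs above, sketched as shell. The hosts, bucket name, paths, and migration command are all placeholders (and printed, not executed); release-it would run something shaped like this from its after:git:release hook:

```shell
#!/bin/sh
# Sketch of the after:git:release hook's two SSH legs.
# Hosts, bucket, paths, and the migrate command are placeholders.
publish_release() {
  tag="$1"
  echo "ssh dev.example.com 'cd /var/www/app && zip -r /tmp/${tag}.zip . && aws s3 cp /tmp/${tag}.zip s3://releases/'"
  echo "ssh live.example.com 'aws s3 cp s3://releases/${tag}.zip /tmp/ && unzip -o /tmp/${tag}.zip -d /var/www/app && ./migrate.sh'"
}

publish_release v1.2.3
```

Because the live server unpacks exactly the archive the dev server produced, the two stay byte-identical, which is the "perfect file copy" property described above.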

Yogi

Wow! I like your GitHub dark mode, can you share the extension, please!

Divine Olokor

You can use the Dark Reader Chrome extension.

Médéric Burlet

This is just the Github Desktop app:
desktop.github.com/

Rich Field

At the day job we have several projects that are deployed independently using BuildKite.

For a freelance client I use CodeShip to handle the deployment of a Firebase hosted site, Firebase Functions and Firebase Database migrations - triggered by a push to the repo. Each branch in the repo deploys separate site/functions/db.

For most small personal projects I use react-static and Netlify; so it's simply a push to the repo.

Jonathan Boudreau • Edited

We serve more than one application at my company.

The first application uses a dated deployment, which goes like this:

  1. Bring up the maintenance page.
  2. Bring down all running web servers.
  3. Migrate the database schema.
  4. Bring up the web servers with the new release.
  5. Remove the maintenance page.

There are a couple of issues with this kind of deployment. For some customers we incur business loss, because they've got people around the globe working at different hours.

The second application uses a rolling deployment, which goes like this:

  1. Migrate the database schema.
  2. Bring up the new web servers.
  3. Add the new web servers to the load balancer.
  4. Remove the old web servers from the load balancer.

There are some special considerations with regards to how migrations need to be written since the old application will still be running. For example removing a column needs to be split into two releases instead of one.
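
To make that two-release column removal concrete, here is a hedged example with made-up table and column names: release 1 only changes code (it stops reading `legacy_col`, no schema change), and release 2 ships the actual drop once no live server touches the column:

```shell
#!/bin/sh
# Release 1: code change only; the column stays, so old and new
# servers coexist during the rolling deploy.
# Release 2: every server now ignores legacy_col, so dropping it
# is safe. Table/column names are invented for illustration.
release_2_migration() {
  cat <<'SQL'
ALTER TABLE orders DROP COLUMN legacy_col;
SQL
}

release_2_migration
```

Dropping the column in a single release would break the old servers that are still behind the load balancer mid-rollout, which is exactly why the change is split.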

To answer your second question, our SDLC (software development life-cycle) looks for the most part like this:

  1. Open a PR.
  2. CI runs tests.
  3. Code review.
  4. Deploy to QA environment.
  5. Changes are tested internally.
  6. Deploy to UAT (user acceptance testing) environment.
  7. Customer validates that changes are OK for production.
  8. Deploy to production.

Ankit Kumar • Edited

AWS + BuildKite pipeline (for uploading, building and deploying)

Molly Struve (she/her)

How has your experience been with BuildKite? Do you like it?

Ankit Kumar

I like it a lot; easy to use and set up.

Ryan Westlund

At mine, we build on a dev server, then use a shell script to deploy to prod. On the prod server, a script creates a new template, and then we use the admin web panel (which I wrote :)) running on prod to upgrade the deployments individually (they use FreeBSD jails).

I am Schulz

At work we use Bitbucket and Jenkins to push to Google's cloud services.
For private projects I try out all sorts of things. One site is pushed manually over FTP, one has GitLab CI, one is on GitHub and Travis... I think I like GitLab most, because it's one integrated and very versatile solution.

David J Eddy • Edited
  • developer commits change to feature branch locally
  • developer pushes code to GitLab
    • triggers a pipeline of tasks
  • development team reviews
  • branch merged to target / environment branch
  • branch is deployed to environment
  • personnel responsible for the environment confirm changes
  • branch is merged to master
  • on deployment day, master is deployed to production

We are trying to move to a more development -> new ephemeral environment per branch -> integration -> production deployment process. That is our current goal, to give the development team more flexibility in their workflow.

Doaa Mahely

For our web app, I merge changes into master, pull the changes to my local machine, and use rsync to sync my local files with our staging server. After testing, I sync my local files with our production server.

It works well enough, but it's annoying when I have to deploy a quick fix and there are changes in staging that are not yet tested or ready for production. When that happens, I revert that MR and pull again, but only if it's an MR with a lot of changes. Otherwise, I make the fix manually on production, but make sure to create an MR for it that is merged and pushed to staging, so that the next time I deploy to production the fix doesn't get lost.

I really want to change this deployment process because I don't have a lot of trust in it; hopefully I'll get to it when I have some time.
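
Until then, one cheap safety net for exactly the "untested changes in staging" worry is to preview every sync before it runs. A sketch with placeholder hosts and paths (the commands are printed so they can be reviewed; rsync's own `--dry-run` flag gives the same preview for real):

```shell
#!/bin/sh
# rsync promotion sketch; hosts and paths are placeholders.
sync_to() {
  target="$1"
  # Swap echo for direct execution once the command looks right;
  # add --dry-run first to preview exactly what would change.
  echo "rsync -avz --delete --exclude .git ./ deploy@${target}:/var/www/app/"
}

sync_to staging.example.com    # test here first
sync_to prod.example.com       # then promote the same local tree
```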

Patryk

At work? ssh, cp, vim, hope for the best. We have automated backups, but no source versioning or CD of any kind.

My portfolio I'm working on uses Gitlab-CI to build docker (compose) containers, test, and deploy them.

Gary Bell
  • SSH to one server. Set node to offline.
  • SSH to other server. Set site to offline.
  • On SSH for first server, do git pull
  • On second server, do git pull
  • If needed, manually apply database changes.
  • On first server, put node online
  • Hope everything works
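
The per-node routine above generalizes to a loop once more nodes arrive. A sketch with invented hostnames and a marker-file style offline switch (printed rather than executed; the real offline mechanism depends on the stack):

```shell
#!/bin/sh
# Loop version of the two-server routine; hostnames and the
# offline-flag mechanism are assumptions.
deploy_node() {
  n="$1"
  echo "ssh $n 'touch /var/www/app/offline.flag'"   # node offline
  echo "ssh $n 'cd /var/www/app && git pull'"
  echo "ssh $n 'rm /var/www/app/offline.flag'"      # node online
}

for n in web1.example.com web2.example.com; do
  deploy_node "$n"
done
```

Doing one node at a time this way keeps the site partially up during the pull, which is the property the manual routine is already relying on.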

Amazingly, that's better than when I started and took over. It was a case of FTPing to the first server and just hoping it didn't break stuff, and also that the files would get rsync'd to the second server. If they didn't, it needed firewall changes to allow SSH access to the server to restart the rsync process.

Our new platform is going to do the deployments automatically using GitLab's CI/CD stuff. Mainly because I don't want to have to keep doing it, but also because there are going to be more server nodes.

Ryan

Jenkins with GitFlow for larger, high-risk products that require more gates to be crossed, and plain ol' Jenkins plus GitHub hooks to automatically build and deploy for smaller products and products with less risk.

Whatever works for you, the tool chain should match the need!

Pim

For us, it's all about PowerShell. We write our scripts manually, using mostly built-in PS cmdlets. Our projects always have one script for build (which gathers all dependencies, restores, etc.) and one for deploy.

Richard Lenkovits

I've made Jenkins jobs for my client: separate jobs for production and development build/deployment, plus several automated server maintenance tasks.

Though I'm not sure I'd recommend this to someone who's not fluent with Jenkins; it can be an overhead to learn, and the market has already shifted from it to simpler automation solutions like GitHub Actions. Still, it's a great and super powerful tool.

Elizabeth Alcalá

AWS Amplify for the frontend app and our serverless backend.

Noorain Panjwani

Our frontend is built on React and Hugo, so we use Netlify to deploy on every push to master.

We have written scripts to upload and deploy our backend code in Docker containers. We have an in-house tool (it's open source, btw) which takes those containers and runs them on Kubernetes or Docker.

Corey McCarty

Previously it was a bunch of scp scripting, but NOW it's a bunch of Jenkins scripting.

Thomas Step

A PR merged to master kicks off a webhook to CodeBuild, which builds and places artifacts in an S3 bucket. CodePipeline is triggered by the update to that artifact in S3 and waits for a manual approval. Then our infrastructure is built or updated from the CloudFormation templates checked into GitHub, if a creation or update is required. The code contained in the infrastructure is updated alongside all of this. I really wanted to go for the whole DevOps thing.

Edvin

As a consultant, we've implemented and worked with a lot of different CI/CD stacks, but most of them are a variant of:

  • I push to Git
  • CI (probably Jenkins) builds, runs all the checks, and pushes the artifact to the respective artifact repository
  • Jenkins pushes the artifact/container image to whatever implementation of k8s

Bunch of auxiliary tasks and checks on top of that basic setup.

Antonio

We had an internal tool that generated a Git repo with a hello-world app built with the selected language/framework (Node, Spring Boot, Angular...), and it also generated Jenkins pipelines for you, so as soon as you had something you wanted to test in DEV, you only had to push the code to the repo and the pipelines deployed it for you.
Super nice setup, both for people who don't know Jenkins and CI/CD, and for more advanced people, as we were able to modify the pipelines as we wished.

Yogi

Sad to say!

People here just edit the files directly in WinSCP 🤷‍♀️, but I'm urging everyone to implement CI/CD via GitHub Actions and use SSH/SFTP-based deployment to Production / Canary / Staging.

Erik D'Ercole

We do zero-downtime deploys of our applications (mostly PHP, Laravel or WordPress, but not only) with deployer.org/.
Very easy, fast and reliable.

Franco Valdes

This is awesome! I had never heard of it. I'll give it a shot with a Craft CMS (Yii2) based platform.

Duc Nguyen

The project I work on at my company runs on Kubernetes. We haven't set up a CI/CD pipeline yet. Most of the time we execute a script that builds a new image of the service and applies the changes to the deployment automatically.

The next project we'll be working on is going to be hosted on a standalone VM. All the different environments are going to sit there. We're planning to use Portainer to manage the Docker containers. To me it seems like a lot of manual work.

Anyone having the same developing experience please share some stories. I'd love to hear them!

JoelBonetR

Git with CI/CD.
It really takes like 10-15 minutes to create a GitLab project, set up the CI script, and test.

Even an old project can be put into a Git repo and wired up with an automated script that runs when a merge request to master is approved. It's not difficult (once you've done it before to train yourself).

If you're on a legacy project that needs several actions when deployed, you could automate those in the CI script too, but if the actions are conditional (i.e. if I push a controller override then delete the server cache, otherwise don't), you'll need to perform those actions manually, I think.

Sergio Daniel Xalambrí

For the Frontend codebase, when I merge to master code is deployed to production and staging.

For the Backend codebase, I manually create a GitHub release to trigger a deployment to production. This is because we need to do security checks in staging before the production deployment.

François

For the "simple" cases with Docker:
When a commit is pushed to the staging branch, it builds the Docker image and pushes it to Docker Hub. Then it triggers a bash script on the staging server which pulls and replaces the image.
As we tag the image with the commit ID, rollback is either one line to run on the server, or a git push of a previous commit.
For production, same workflow, but it asks for a manual click to deploy.
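
Tagging by commit ID is what makes the one-line rollback work: rolling back is just redeploying an older SHA. A sketch with a placeholder registry and image name (commands printed, not run):

```shell
#!/bin/sh
# Deploy (or roll back) by commit SHA; registry/name are placeholders.
deploy() {
  sha="$1"
  echo "docker pull registry.example.com/app:${sha}"
  echo "docker rm -f app"
  echo "docker run -d --name app registry.example.com/app:${sha}"
}

deploy 1a2b3c4   # current release
deploy 0f9e8d7   # rollback: same command, older SHA
```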

PatricNox • Edited

Before everything, we always do a PR review.

Thereafter:

Most Laravel projects:

  • Bitbucket pipeline for code check
  • Laravel forge for provision
  • Envoyer for deploying

Other projects (Drupal, WP) and/or tiny ones:

  • Bitbucket pipeline for code check
  • Bitbucket pipeline, deploy on merge to master

Keff

At our company we have 2 separate environments (PRE/PROD), and a couple of different pipelines:

For PRE we do CI/CD with GH Actions: anything we push to master (if it passes the tests) gets built into a Docker image, then the action calls a webhook which tells Portainer to update the container with the newly built image.

For PROD it's quite similar; the only difference is we only deploy when a Release is published (we don't like CI/CD for production).

We previously used Jenkins, and before that, we uploaded with SSH (does the job :P).
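
The webhook step in a flow like this can be as small as one authenticated-by-URL POST from the CI job; the Portainer host and webhook token below are placeholders, and the command is printed rather than sent:

```shell
#!/bin/sh
# What a GH Actions step might run after pushing the image;
# host and webhook token are placeholders.
notify_portainer() {
  token="$1"
  echo "curl -fsS -X POST https://portainer.example.com/api/webhooks/${token}"
}

notify_portainer 1234-abcd-5678
```

Since the token in the URL is the only credential, it belongs in the CI platform's secret store, not in the workflow file.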

Paweł Kowalski • Edited

Github push ->
Jenkins CI Build docker images ->
Terraform (or CloudFormation, depends on stack) apply if changed infra ->
New stack spin up ->
Switch LB to new stack when new code is green ->
Cleanup old stack

Chathula Sampath • Edited

We use Jenkins. Every time someone creates a branch and pushes something, Jenkins runs all the tests and builds that branch independently, and if everything is okay, we create an artifact as a gzip and upload it to AWS S3.

Then whenever someone wants to check the changes on a branch, we have a separate page in Jenkins where we can say which environment to deploy a specific branch to. You select an environment from a dropdown and type the branch name. Then it downloads the changes from S3, extracts them, copies the changes over with rsync, and restarts the process. Within 5-10 seconds, you are ready! We only allow prod to deploy the master branch. With this approach we can test any branch in any test env, so you don't need to have staging, dev, etc. branches other than master!

dev-head

At work we're doing one of the following:

  • Git -> TravisCI -> CodeDeployment (non docker services)
  • Git -> TravisCI-> ECS Deployment (docker services)
  • SSH -> (of course; some services still need that hand touch)

It's up to the project owner to define the release strategy; usually they use environment-specific deployment branches and a version tag for pushing an official production release.

Victor Darkes

On-prem GitLab for building artifacts and Dockerizing the application for the CI portion. No true CD at the moment. Currently it's a manual canary strategy, followed by manual feature testing against the canary, and then a rolling deployment once verified.

jizavala

Well, you just compile your VB.NET (yes, you read that right: VB.NET) and use Beyond Compare to copy the DLLs and executables to the servers. If you have to do something with databases, you need to wait for a specific day and time to request a database change; if it's urgent (which is pretty much every day), the only person who can give you his blessing is the IT director. To my bad luck, I don't have that sense of urgency that everyone talks about.

Cheers!!

Alec Brunelle

Push docker images to Docker Hub on merge to master, then docker-compose up on the server.

pranay rauthu

Uploading WAR files to JBoss servers. We transitioned to doing the same via a Jenkins server.

Paul

For my day job, we upload manually to AWS.

For my freelance gigs, I have continuous deployment set up with GitHub and Netlify.

Reaper

Gitlab CI/CD at my “Financial” Support Org.

Github Actions at BarelyHuman

Tiago Celestino

We're using Jenkins for CI/CD and to publish Docker containers to our Swarm repository. For projects in Node.js, my team started using GitHub Actions for CI/CD and publishing to npm.

omar shabab

For the current project, we're primarily working with cloud functions. Bitbucket Pipelines + GCP.

Corentin Bettiol • Edited

We just push from the dev env to the master repo, then pull to the staging env, and then pull to prod.

Vicente G. Reyes

Just the netlify cli for static sites and heroku cli for dynamic sites.

Bhavani Ravi

For the current project I am building:

docker build
docker push
kubectl ... --replicas=0
kubectl ... --replicas=1

For the one that's in production, we have CI/CD with cronos and Ansible scripts.

Mike

We're big fans of DeployBot. We use DigitalOcean and Bitbucket, so it makes it really simple to deploy from BB into DO.

Daragh Byrne

Code gets attached to a tennis ball, then thrown into a hockey rink, and we hope the players hit it in the right direction (sometimes it's felt like that!)

Gajender Tyagi

I've done both manual object deployments and a CD process.

Luigui Moreno

SSH into the server and run "git pull origin master".