What is the process to get code into prod?
Top comments (72)
Honestly - it's just FTP & manual database pushes 🤷‍♂️
It's not sophisticated or fancy, but it works.
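For anyone curious what that looks like in script form, here's a minimal sketch using lftp's mirror mode; the host, credentials, and paths are placeholders, not the poster's actual setup:

```bash
#!/usr/bin/env bash
# Minimal FTP deploy: mirror the local build folder up to the server.
# Host, credentials, and paths are hypothetical placeholders.
set -euo pipefail

lftp -u "$FTP_USER","$FTP_PASS" ftp.example.com <<'EOF'
mirror -R --only-newer --verbose ./public_html /public_html
bye
EOF
```

Database changes would still be applied by hand in this setup, which is usually the part that bites.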
Thank you for your answer. It's important to keep in mind that even though we read all day long about fancy new techniques and tools, most of us are working on legacy codebases and deploying manually.
That said, Continuous Deployment is not just a fad. I recently changed jobs and moved from GitLab CI/CD (which is really nice) to a mix of "git pull" on the server, SFTP, rsync, and running the migrations manually... And it's a huge pain and a huge waste of time (not to mention that if something goes wrong we don't have an easy way to roll back to the previous version).
I haven't yet set up CI/CD pipelines because we use on-premises Bitbucket and it doesn't seem to offer CI/CD (which means we'll need to install Jenkins or something, and I'll have to learn that), but it's pretty high on my to-do list.
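In the meantime, even a small wrapper script can take some of the pain out of that manual routine. Here's a rough sketch; the host, path, and migration command are assumptions about a setup like the one described:

```bash
#!/usr/bin/env bash
# Stopgap script for the manual routine: record the current commit,
# pull the new code, and run migrations in one step.
# Host, path, and the migration command are hypothetical.
set -euo pipefail

SERVER="deploy@app.example.com"
APP_DIR="/var/www/app"

ssh "$SERVER" bash -s <<EOF
set -euo pipefail
cd "$APP_DIR"
git rev-parse HEAD > /tmp/last_release   # something to roll back to
git pull origin master
./bin/migrate                            # replace with your real migration command
EOF
```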
I used to be on Bitbucket too, but I switched to GitLab and I find no reason to use anything different; I recommend you give it a try. I don't use self-hosted, but I guess you'll have the same options.
It does, it's called Pipelines I think. It's pretty decent.
As far as I can tell Pipelines is only available on Bitbucket Cloud, and not the self-hosted version (Bitbucket Server)? I'd love to be wrong though.
Ah ok, I don't know more about that.
No shame in not using "fancy" CI tools. Whatever does the job.
Obviously you don't have to be ashamed of not using "fancy" CI tools, but when you do, you'll see why people are using them.
I've learned over the last 10 years that technologies that meet a need stay, and technologies that don't, disappear or remain in legacy projects.
Git isn't something new (as you should know). CI scripts aren't new either; they only simplified a two-step task (where you were using git, svn, mercurial or whatever, plus Rundeck or a similar automation that needed to be fired manually) into a single-step one where devs only need to push to master (if they have permission) and it all rolls smoothly into production, with an easy rollback if needed.
If you are not using a version control service, then yes, you need to be ashamed.
I agree with Ben: "Whatever does the job". I worked at a company that had this approach too, with huge legacy products. I wrote a script to automate deployments like that over SSH; maybe it could be useful for you: github.com/felippe-regazio/sh-simp...
AWS CodePipeline + AWS CodeDeploy + AWS CodeBuild
Same here, only our stack is HTML/JS/CSS + Python/Django + MongoDB/MariaDB. Every commit merged into the develop branch on our GitHub repo is immediately deployed to our dev/staging environment, also on AWS; the same process applies to the master -> production counterparts.
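As a rough illustration of that branch-to-environment mapping, the deploy stage of such a pipeline often boils down to something like this; the environment names and deploy command are assumptions, not the poster's actual configuration:

```bash
#!/usr/bin/env bash
# Map the pushed branch to a target environment before deploying.
# Branch names and the deploy command are hypothetical.
set -euo pipefail

BRANCH="${1:?usage: deploy.sh <branch>}"

case "$BRANCH" in
  develop) ENV="staging" ;;
  master)  ENV="production" ;;
  *) echo "No deployment configured for $BRANCH"; exit 0 ;;
esac

echo "Deploying $BRANCH to $ENV"
./scripts/deploy-to-aws.sh "$ENV"   # placeholder for the actual CodeDeploy step
```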
What stack? I have run into issues using NextJS with this deployment approach. TIA
Ruby on Rails, though the process is identical because NextJS is just a nodejs app.
I had a course I made on Udemy last year for creating a pipeline with Rails, but you could just skip the Rails part. I've been meaning to release that video course for free.
I would love to get to this point with my job.
The coolest and most frustrating thing about DevOps is that there are a hundred different ways to do something. I say this in the hope I won't be judged too harshly for how we do deployments.
I should first mention that we're not a company in the web app space. The company I love working for primarily creates cross-platform C++ applications that run on Linux/Windows appliances. Also, as a DevOps Engineer, my customers aren't always actual customers; more often than not, they're developers. When we deploy, we remotely update the Linux or Windows platform, then uninstall any existing software, reboot, install the most up-to-date software, license it, and verify the installation was successful.
We accomplish this primarily through Ansible playbooks that handle the actual deployment, and we use Jenkins jobs as the self-service mechanism for our developer customers. When devs want to upgrade their systems to test or do whatever, they can go to Jenkins, enter their IP, select the version to install, and click 'Build'. The rest of the process is seamless to the customer, with the exception of the 'DevOps is deploying' screen we run during the deployment to let the remote user know the system is doing something.
I know we could look into Ansible Tower or FOSS alternatives, but people got used to Jenkins so I try to let that be the common interface for self-service tasks performed by our developer customers that need an automated capability.
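Under the hood, a self-service Jenkins job like that typically reduces to a parameterized ansible-playbook call. Here's a hedged sketch; the playbook name and variable names are made up for illustration:

```bash
#!/usr/bin/env bash
# What the Jenkins 'Build' button might run: a parameterized playbook.
# TARGET_IP and VERSION would come in as Jenkins job parameters.
# Playbook and variable names are hypothetical.
set -euo pipefail

ansible-playbook deploy_appliance.yml \
  --inventory "${TARGET_IP}," \
  --extra-vars "app_version=${VERSION}" \
  --limit "${TARGET_IP}"
```

The trailing comma after the IP tells Ansible to treat it as an inline inventory rather than an inventory file.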
AWX should meet your needs, it's basically Tower for free and integrates with your existing Ansible roles:
github.com/ansible/awx
We run a lot of workloads on Kubernetes nowadays. When you put the internet hype aside, it's a very solid platform to automate and manage lots of applications at once. It allows us to cut down infrastructure costs for many clients we provide hosting for.
Our standard deployment procedure is `git push` on a particular branch (usually `master`), which triggers a pretty standard CI/CD pipeline: run tests, run linters, build & push Docker image, apply Kubernetes manifests. If anything goes wrong, Kubernetes allows us to roll back the deployment. We handle different environments (dev, QA, prod) either with different branches or with manual env promotion, depending on the pipeline provider.
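For a concrete picture, the stages described above might look roughly like this as a script; the image name, registry, deployment name, and manifest path are placeholders:

```bash
#!/usr/bin/env bash
# Sketch of the CI/CD stages: test, lint, build, push, deploy.
# Image name, registry, and manifest path are hypothetical.
set -euo pipefail

IMAGE="registry.example.com/myapp:${CI_COMMIT_SHA:-latest}"

make test   # placeholders for the project's real test and lint commands
make lint

docker build -t "$IMAGE" .
docker push "$IMAGE"

kubectl apply -f k8s/manifests/
kubectl rollout status deployment/myapp

# If the rollout misbehaves, Kubernetes can revert to the previous ReplicaSet:
# kubectl rollout undo deployment/myapp
```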
Do you use a separate cluster for each environment, or just one cluster with multiple namespaces? We're moving to Kubernetes and currently just have one cluster (for staging while in development), but we're not sure whether to add another cluster for prod. It's more expensive, but gives us better separation.
It depends on which environment you are trying to deploy to. At my company, we have multiple environments of the same application. One for Dev, QA, and Production.
For the sake of brevity, let's take a deployment from QA to Production. Note:
Local Machine -> Dev (do it as many times as your heart wishes)
Dev -> QA (OK with some restrictions)
QA -> Production (OK with a lot more restrictions)
Dev -> Production (A BIG NO NO, could get me fired!)
This process is very cumbersome at times and deployments can often span days. However, I have heard talk of going to fully automated deployments, but they are still trying to set up the nuts and bolts for the whole operation.
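A common way to encode a promotion policy like that in tooling is a guard at the top of the deploy script. This is just an illustrative sketch under assumed environment names, not the poster's actual process:

```bash
#!/usr/bin/env bash
# Enforce the allowed promotion paths: dev -> qa -> prod only.
# Environment names and the deploy step are hypothetical.
set -euo pipefail

FROM="${1:?usage: promote.sh <from-env> <to-env>}"
TO="${2:?usage: promote.sh <from-env> <to-env>}"

case "$FROM->$TO" in
  "dev->qa"|"qa->prod")
      echo "Promoting build from $FROM to $TO" ;;
  *)  echo "Promotion $FROM -> $TO is not allowed" >&2
      exit 1 ;;
esac

./deploy.sh "$TO"   # placeholder for the real deployment step
```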
So, you have an operations team which is named devops?
I bet everyone at the company is annoyed at how "devops" has made things more complicated for little benefit.
It seems one of the biggest challenges with these new development processes is that they require true collaboration, something not heavily prioritized and often actively avoided. It is so much easier to create definitions for interface handoff. We do it in good software architecture all the time.
There's more than one application which we serve at my company.
The first application uses a dated deployment, which goes like this:
There are a couple of issues with this kind of deployment. For some customers we incur business loss because they've got people around the globe working at different hours.
The second application uses a rolling deployment, which goes like this:
There are some special considerations with regard to how migrations need to be written, since the old application will still be running. For example, removing a column needs to be split into two releases instead of one.
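To make the column example concrete, here's a hedged sketch of that two-release split; the table and column names are made up, and the exact commands depend on your database and migration tooling:

```bash
#!/usr/bin/env bash
# Two-release column removal, so the old app version keeps working
# alongside the new one during a rolling deploy.
# Table/column names and the psql connection are hypothetical.
set -euo pipefail

# Release 1: ship app code that no longer reads or writes `legacy_notes`.
# No schema change yet; old and new instances can run side by side.

# Release 2: once no running version references the column, drop it.
psql "$DATABASE_URL" -c 'ALTER TABLE orders DROP COLUMN legacy_notes;'
```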
To answer your second question, our SDLC (software development life-cycle) looks for the most part like this:
A simple process:
I use release-it
github.com/release-it/release-it
Since I use gitmoji and karma syntax, it generates a GitHub release changelog that is very easy to read for us and for clients.
Afterwards, in the `after:git:release` hook of release-it I have a set of commands that does the following:

This is quite practical as I just have to run `release-it` in the folder of the project and it generates and does everything. It also means the dev and live server are a perfect file copy, even installed packages. We still have a staging server as well for all ongoing testing.
Wow! I like your GitHub dark mode, can you share the extension, please?
This is just the GitHub Desktop app:
desktop.github.com/
You can use the Chrome Dark Reader extension.
At the day job we have several projects that are deployed independently using BuildKite.
For a freelance client I use CodeShip to handle the deployment of a Firebase hosted site, Firebase Functions, and Firebase Database migrations, triggered by a push to the repo. Each branch in the repo deploys a separate site/functions/db.
For most small personal projects I use react-static and Netlify; so it's simply a push to the repo.
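As an illustration of the Firebase part, a branch-triggered CodeShip step often amounts to a couple of CLI calls like these; the project aliases are placeholders:

```bash
#!/usr/bin/env bash
# Deploy hosting and functions for the Firebase project tied to this branch.
# Project IDs are hypothetical; CodeShip exposes the branch as CI_BRANCH.
set -euo pipefail

case "${CI_BRANCH:-master}" in
  master) PROJECT="my-app-prod" ;;
  *)      PROJECT="my-app-staging" ;;
esac

firebase deploy --only hosting,functions --project "$PROJECT"
```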
AWS + BuildKite pipeline (for uploading, building, and deployment)
How has your experience been with BuildKite? Do you like it?
I like it a lot, easy to use and set up.
For our web app, I would merge changes into master, pull the changes into my local and use `rsync` to sync the files in my local with the files on our staging server. After testing, I would sync the files in my local with our production server.

It works well enough, but it's annoying when I have to deploy a quick fix and there are changes in staging that are not yet tested or ready for production. When that happens, I'd revert that MR and pull again, but only if it's an MR with a lot of changes. Otherwise, I do it manually on production, but make sure to create an MR for it that is merged and pushed to staging so that the next time I deploy to production the fix doesn't get lost.
I really want to change this deployment process because I don't have a lot of trust in it, so hopefully I'll get to it when I have some time.
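For reference, the rsync step in a flow like that usually looks something like this; the hosts and paths are placeholders, and given the trust concerns, `--dry-run` is worth keeping handy:

```bash
#!/usr/bin/env bash
# Sync the local working copy to staging (or production).
# Hosts and paths are hypothetical placeholders.
set -euo pipefail

TARGET="${1:-staging}"   # staging | production

case "$TARGET" in
  staging)    DEST="deploy@staging.example.com:/var/www/app/" ;;
  production) DEST="deploy@prod.example.com:/var/www/app/" ;;
  *) echo "unknown target: $TARGET" >&2; exit 1 ;;
esac

# Add --dry-run first to preview exactly what would change.
rsync -az --delete --exclude '.git' ./ "$DEST"
```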