Sweet Jesus, of course it's a sales pitch.
Not yet :) we still don't have anything to sell. This is just my opinion, based on spending a couple of decades in IT. Cloud and then containers were tectonic shifts, but now it's time for the next stage.
Hence the clickbait title.
For me, the big "hell no" to CD was when my boss received a phone call from a customer who was screaming and crying because we added 1 button to the UI for a requested feature. This change was the last straw for this particular user, and after rolling back the change, our customer agents collected user feedback and we got a very important backhand across the face from reality...
We used to practice 2-week deployments with a roadmap to get onto a full CI/CD workflow so that, even with feature flags, we could roll out changes quickly. After we almost lost our largest client over a button, we pulled back to a quarterly deployment compromise, since many of our customers were adamant that we should do fewer updates, as few as once a year in some cases. What we took away from this event was that end users DO NOT WANT CHANGES; when they have their own work to do, they don't have time to learn new features every week, day, or god forbid every hour. This is not a matter of fear, it's a matter of compassion for your end users' time.
Another thing to consider is regulatory compliance. In some industries (like healthcare in the US) you have to certify your software, and major "feature" modifications trigger a significant and costly recertification process. Adding new features more than a few times a year could drive small businesses out of business, with recertification fees of $20k and up each time.
IMO this is a product/leadership horror story, not necessarily a technology horror story. The issue is that the right feature wasn't built – or it was built in a way that required new processes from the customer. When that happens, the methodology and timeline of its release isn't the cause of the failure – it would have always been poorly received. There is a missing 'product' role here – a person or team that is in constant conversation with the customer and ultimately responsible for which features make it into the product.
You're right on about compliance. So a CD "culture" or structure isn't a drop-in solution for every business. Some industries simply should not be releasing new UI changes or features all the time.
CD isn't always about features, though. Sometimes it's about performance, security or technical debt. In fact, a trusted CD process is a potential solution to this type of "bad feature" issue, allowing for fast backpedaling.
When talking about features, it is totally true that customers don't want change. They don't even want the "product", what they want is what your product enables them to achieve. They're hiring your product to get their job done. Compassion includes being on that journey with them when scenarios change.
An example: your customer's industry has a new legal regulation that requires them to change how they work (or let's say, a global pandemic occurs and changes everything 😉 ). In this scenario, compassion for the customer means anticipating their needs, and releasing changes as quickly and confidently as possible – as the situation evolves. This responsiveness is what CD enables.
"lost a client over a button" is surely a scary story! but wouldn't a better approach be making sure each client only gets the changes they need? CD isn't necessarily about new features. It's also about fixing bugs, improving performance and continuously paying off our tech debt. It's also about being able to roll things back quickly when the shit hits the fan. Once we're able to do this - we'll have our client's trust and won't fear losing them over a button.
This confirms my belief that CD is driven almost entirely by parochial MIS departments wanting to jump onto the latest DevOps craze or improve their own processes. I would be interested to know if anybody has seen quantifiable real-world benefits experienced by users outside of the IT department.
Definitely the key to continuous delivery. Devs avoid deploys when they are difficult or risky; deploys are risky when:
Instead of taking steps to make mistakes no- or low-impact, gatekeeping steps are layered atop each other to ensure no mistakes are made, simultaneously ensuring that any mistakes that are missed stay for weeks to months (to years) waiting for a fix to make it through the same gatekeepers.
Beautifully put! Lack of usable technology or engineering expertise is compensated for by broken culture. That's the situation we are out to fix.
This is a good article, but I'm a little confused by this central claim. It seems surprising that someone would claim to have a CI/CD pipeline if they only have CI. It seems like a difficult mistake to make, like if I said that I basically have a car when I actually just have a bicycle. They're very different things.
Could you perhaps elaborate on what you consider to be CD that other organizations don't recognize? Or could you give an example of something that someone thought was CI/CD that wasn't?
My main idea is that we tend to conflate CI with CD. Folks start with CI naively believing that with time - as they build it out - the same pipeline will take them to CD land. But then they hit the wall of uncertainty and stop the pipeline at the "staging" environment. So when you ask them, they say "we have CD, but we're not deploying to production because reasons" - and that's denial of course.
This makes sense, thanks!
This is through the prism of a backend dev, right?
Because on the frontend, many of us use CI/CD now, with tools like Netlify, Vercel, Render, Surge... We have deploy previews for each PR, and it deploys to production on merge.
AFAIK some backend colleagues using Heroku also have this kind of workflow with Review Apps.
But despite that setup, humans are still afraid to be responsible for production problems. This just moves the fear to merging a PR, and we generally have a human reviewing the deploy previews.
Yes, definitely - things look brighter in the FE world. There's still that issue with syncing between the previews of FE and BE. And - can you release your previews gradually to a small percentage of your customers with Netlify and the bunch? Asking because I don't know.
For rollout strategies, I've seen this Netlify product recently that talks about phased rollouts: netlify.com/products/edge/
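For anyone wondering how a gradual rollout works under the hood, here's a minimal sketch of percentage-based bucketing, assuming stable user IDs - the hashing scheme and the `rolloutPercent` knob are illustrative, not Netlify's actual mechanism:

```typescript
// Minimal sketch of percentage-based rollout bucketing.
// Assumption: each user has a stable ID; hashing it means the same
// user consistently lands in or out of the rollout.
function hashToBucket(userId: string): number {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep it unsigned 32-bit
  }
  return hash % 100; // bucket in [0, 99]
}

// True if this user should see the new version.
function inRollout(userId: string, rolloutPercent: number): boolean {
  return hashToBucket(userId) < rolloutPercent;
}

// Usage: start at 5%, watch error rates, then ramp up.
const version = inRollout("user-42", 5) ? "canary" : "stable";
console.log(version);
```

Because the hash is deterministic, users don't flip between versions as you ramp the percentage up - the rollout only ever grows.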
I used to have 2 deployments for my startup: one on the dev branch, one on master. We had some users (including ourselves) using the dev branch by default, so we would notice early if something was wrong (as we use our own product).
Technology changes fast. We, as humans, have difficulty changing.
That's the only answer I have for too many questions, including why many companies out there don't do CD, why they don't try to really create their own agile environment adapted to their culture (even if they think they do because they use Scrum or whatever process/tooling), and why the interview process in tech is often a joke full of whiteboards (I mean, who codes on a computer?) with Google forbidden, of course.
That was a really good read! Thanks for that.
thank you Matthieu! but we as humans are also capable of much more if we create an environment that supports it.
I think when we're talking about this, it's really useful to distinguish /deployment/, which is getting code onto production, from /release/, which is the business decision about what users see and when they see it. Breaking those up does a lot to de-risk CD, because deployed code isn't immediately visible to users. Then you have time to test in production and find all the places things can or will go wrong. Or, well, most of them.
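A minimal sketch of that deployment/release split using a feature flag - the in-memory flag store and feature names here are hypothetical stand-ins for whatever flag service you actually use:

```typescript
// Deployment: the new checkout code is already live in production.
// Release: users only see it once the flag flips.
// Assumption: a simple in-memory flag store; a real setup would use
// a flag service (LaunchDarkly, Unleash, a config table, etc.).
const flags: Record<string, boolean> = { "new-checkout": false };

function legacyCheckout(): string { return "legacy checkout UI"; }
function newCheckout(): string { return "new checkout UI"; }

// Deployed but dark: flipping the flag "releases" the feature
// without another deploy; flipping it back is the rollback.
function renderCheckout(): string {
  return flags["new-checkout"] ? newCheckout() : legacyCheckout();
}

console.log(renderCheckout()); // "legacy checkout UI"
flags["new-checkout"] = true;  // the release - no deploy involved
console.log(renderCheckout()); // "new checkout UI"
```

The release (and the rollback) becomes a config change instead of a deploy, which is exactly what buys you that time to test in production.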
We should all keep in mind that "production" often means "an isolated environment in the field, anywhere in the world, with small embedded systems which might not even be capable of being deployed to automatically".
This means that, in the real world, true feedback loops are often impossible to implement. Blaming developers who only use CI in those cases, when that is the most they are able to do, is not the correct thing to do.
The judgment may apply to those fancy, well-connected developers from the hipster web development bubble where true CD can be established.
But there are an awful lot of embedded systems programmers out there who have already struggled with manual deployment processes for years and certainly will for decades to come.
For me and my team, "only CI" has been a dramatic improvement in code quality and in the automation of nasty build processes which take hours to complete and used to cost us a lot of time before we had a working CI environment.
So I consider "only CI" a great thing overall which should not be blamed so harshly.
A great point! Edge and embedded deployments are definitely a largely untackled challenge. But even that is changing today. Look, for example, at what Zededa (zededa.com/) are building.
Most of my past bosses be like: "What the f**k is this CI/CD you're talking about? Why would I spend good money on this DevOps BS when I already have code monkeys at my disposal?"
My heart goes out to you. Never, ever let your boss treat you like a code monkey! I was there - it sucks!
No worries, actually most of them were OK... just a couple that were horrible, and I was out of there very fast, so... Still, most didn't wanna hear about automation or paid expert consultants...
I think the terminology is unfortunately overloaded.
gitlab.com/jessephillips/blog/-/bl...
Industry has shoved everything into CI and then reserved CD for production. Realistically there is CI, something in the middle, then CD.
Organizations have done CI for a long time, but then they want to do "something in the middle". It is a hard sales push, "let's implement CI!"... "umm, didn't we do that last year?" "yes, but we just do this every year"
I think it is because it's hard to realize there is ambiguity in where the lines exist and to ask for clarification.
yes, ambiguity definitely leads to misinterpretation. but my point is that CD in the cloud native world is a totally different concern - not just an extension of our pipeline. CI or "something in the middle" can be implemented by a basic workflow automation tool. CD requires smarter, domain-specific algorithms and strategies.
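To make "domain-specific strategies" concrete, here's a rough sketch of the kind of automated canary analysis loop a CD controller runs and a generic workflow tool doesn't - the metric source, the 1% threshold, and the 5-sample window are all made-up assumptions:

```typescript
// Sketch of an automated canary loop: compare the canary's error
// rate against the stable baseline over a few samples, then
// promote or roll back automatically.
// Assumptions: fetchErrorRate() queries your metrics backend; the
// 0.01 (1%) threshold and 5-sample window are arbitrary examples.
async function canaryRollout(
  fetchErrorRate: (version: string) => Promise<number>,
  promote: () => Promise<void>,
  rollback: () => Promise<void>,
): Promise<void> {
  for (let sample = 0; sample < 5; sample++) {
    const canary = await fetchErrorRate("canary");
    const stable = await fetchErrorRate("stable");
    if (canary > stable + 0.01) {
      await rollback(); // canary is measurably worse: back out
      return;
    }
    // wait a minute between samples
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }
  await promote(); // canary held up across the window: ship it
}
```

The point is the decision logic: promotion and rollback are driven by production metrics, not by a human clicking "approve" at the staging gate.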
I was first! 😂😂😂
And you won't be the last :)) But we're out to change that!
A great post BTW! We definitely see things with the same perspective.
Thanks 🙇♂️
Yes. I believe that logic-led people sharing the same experience usually end up coming to the same conclusion.
Well... Experience (read "age") has taught me it's hard to fight against a whole hype-driven industry. I'm happy to debunk it and then move on
Unless the industry has reached its tipping point... Which is exactly what we believe is happening.
CI is a big factor toward CD pipelines, hence the importance of DevOps. Trunk-based CI lets you package up versionable artifacts that feed into CD pipelines, vs feature branches with human-negotiated releases. The latter would require multiple CD pipelines for multiple artifacts from multiple branches. When you add a monorepo, it gets more complex: you'll need to extract multiple artifacts, each on a different CD pipeline, for each CI pipeline, for each branch, for each artifact produced from the monorepo - so this requires a build tool/environment that has a DAG to converge the proper changes into discrete artifacts (see the sketch below).
Many implementations are not mature enough to even perceive this, especially in the Jenkins community (or other CIs that use the same model), where CI is often intermixed with deployments. That often yields a chaotic, unsustainable mess, with CI pipelines that do deployments without atomic CD pipelines.
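A toy sketch of that monorepo DAG idea - the package names and dependency map are invented, and real build tools (Bazel, Nx, etc.) derive this from build files rather than a hard-coded map:

```typescript
// Toy version of "which artifacts do I rebuild?" in a monorepo:
// walk the dependency graph upward from the changed packages and
// collect every artifact that (transitively) depends on them.
const dependsOn: Record<string, string[]> = {
  "api-server": ["shared-lib"],
  "web-app": ["shared-lib", "ui-kit"],
  "ui-kit": [],
  "shared-lib": [],
};

function affectedArtifacts(changed: Set<string>): Set<string> {
  const affected = new Set(changed);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [artifact, deps] of Object.entries(dependsOn)) {
      if (!affected.has(artifact) && deps.some((d) => affected.has(d))) {
        affected.add(artifact);
        grew = true;
      }
    }
  }
  return affected;
}

// A change to shared-lib should feed the api-server and web-app
// CD pipelines, but leave ui-kit alone.
console.log(affectedArtifacts(new Set(["shared-lib"])));
// -> Set { "shared-lib", "api-server", "web-app" }
```

Only the affected artifacts get versioned and handed to their CD pipelines, which is what keeps one monorepo from fanning out into one chaotic mega-deploy.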