<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alessandro Diaferia</title>
    <description>The latest articles on DEV Community by Alessandro Diaferia (@alediaferia).</description>
    <link>https://dev.to/alediaferia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F57825%2F73cc2da2-cb60-413a-87e7-2a4a8ad4b319.jpg</url>
      <title>DEV Community: Alessandro Diaferia</title>
      <link>https://dev.to/alediaferia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alediaferia"/>
    <language>en</language>
    <item>
      <title>Presumed technical debt: how to recognise it and avoid it</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Thu, 26 Oct 2023 15:01:43 +0000</pubDate>
      <link>https://dev.to/alediaferia/presumed-technical-debt-how-to-recognise-it-and-avoid-it-3cm0</link>
      <guid>https://dev.to/alediaferia/presumed-technical-debt-how-to-recognise-it-and-avoid-it-3cm0</guid>
      <description>&lt;p&gt;The programming community unanimously considers technical debt an aspect of our work to keep under control and reduce.&lt;/p&gt;

&lt;p&gt;Personally, I’ve been vocal about the perils of technical debt: one of my early &lt;a href="https://alediaferia.com/2018/02/14/technical-debt-kills-your-company/"&gt;blog posts&lt;/a&gt; discusses the organizational issues it can cause.&lt;/p&gt;

&lt;p&gt;While I still stand by the majority of what I described in that article, I want to clarify my take on what I call the &lt;em&gt;presumed technical debt.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What is technical debt?&lt;/h2&gt;

&lt;p&gt;First of all, though, let’s agree on what the term means. Technical debt is a broad term that encompasses many aspects of building and delivering software.&lt;/p&gt;

&lt;p&gt;Martin Fowler refers to it as the &lt;em&gt;&lt;a href="https://martinfowler.com/bliki/TechnicalDebt.html"&gt;cruft&lt;/a&gt;&lt;/em&gt; that builds up over time and makes it harder to modify and extend the system. He then credits Ward Cunningham as the likely original author of the &lt;em&gt;Technical Debt&lt;/em&gt; metaphor.&lt;/p&gt;

&lt;p&gt;Here is what Ward Cunningham first wrote about &lt;strong&gt;technical debt&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as &lt;a href="https://en.wikipedia.org/wiki/Interest"&gt;interest&lt;/a&gt; on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, &lt;a href="https://en.wikipedia.org/wiki/Object-oriented_programming"&gt;object-oriented&lt;/a&gt; or otherwise.&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;Ward Cunningham, 1992&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here I interpret &lt;em&gt;not-quite-right&lt;/em&gt; code as referring to code designed on &lt;strong&gt;assumptions and understanding that haven’t yet been validated by the end users&lt;/strong&gt;. I’ll clarify this further later.&lt;/p&gt;

&lt;p&gt;Finally, I would add my own definition of what &lt;em&gt;technical debt&lt;/em&gt; is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Technical debt is the &lt;strong&gt;gap&lt;/strong&gt; between the &lt;strong&gt;current state&lt;/strong&gt; of the software’s non-functional characteristics and the &lt;strong&gt;desired state&lt;/strong&gt; they should be in, provided you had infinite time, infinite money and an absolute understanding of the problem your software is meant to solve.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Different forms of technical debt&lt;/h2&gt;

&lt;p&gt;There is a substantial amount of &lt;em&gt;natural&lt;/em&gt; cruft that builds up in software because of the different forces acting on it: dependencies going stale, requirements or user needs changing, more code being added, code paths falling out of use, and so on.&lt;/p&gt;

&lt;p&gt;This type of &lt;strong&gt;natural technical debt&lt;/strong&gt; should be continuously monitored. You won’t ever get rid of it completely, but there should be a constant tension towards keeping it under control.&lt;/p&gt;

&lt;p&gt;In my experience, though, there is an additional form of &lt;em&gt;technical debt&lt;/em&gt; that is less tangible, while representing a considerable influencing factor when developers design software: the &lt;em&gt;presumed technical debt&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Presumed technical debt&lt;/strong&gt; is a form of debt that developers think they will soon incur if certain non-functional characteristics are not implemented. Let’s explore this form of technical debt together.&lt;/p&gt;

&lt;h3&gt;Non-negotiable characteristics&lt;/h3&gt;

&lt;p&gt;First of all, I want to clarify that there certainly are &lt;strong&gt;baseline non-negotiable characteristics&lt;/strong&gt; that the organization might require before any code gets into production. These include, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;observability and operational envelopes&lt;/li&gt;
&lt;li&gt;security and compliance requirements&lt;/li&gt;
&lt;li&gt;data retention and governance constraints&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But there are other characteristics, like scalability, availability or resilience, that depend heavily on the specific use case you are solving for.&lt;/p&gt;

&lt;h3&gt;Technical debt or just lack of aspirational characteristics?&lt;/h3&gt;

&lt;p&gt;Despite this, in my experience, it’s not unusual for developers to design new functionality targeting &lt;strong&gt;infinite scalability&lt;/strong&gt;, &lt;strong&gt;eternal availability&lt;/strong&gt; and &lt;strong&gt;absolute resilience&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At times it feels like those characteristics are the default starting point: developers aim for them, then compromise as they face business constraints like time and money.&lt;/p&gt;

&lt;p&gt;The MVP ends up not shipping on an auto-scaling, container-based cloud infrastructure. It turns out the same value could be delivered via a static HTML page hosted on S3 and served via CloudFront.&lt;/p&gt;

&lt;p&gt;Nevertheless, many teams will still aim at container-based cloud infrastructure as the next iteration for what they shipped. They add that aspiration to their backlog, regardless of the feedback they receive from the MVP.&lt;/p&gt;

&lt;p&gt;It’s not unusual for them to think that what they have shipped is not &lt;em&gt;fit-for-success&lt;/em&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When success comes, it’ll bring troubles to this architecture&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;Anonymous developer&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having compromised on some ideal characteristics, some developers might not feel happy about the MVP. They might be convinced it certainly won’t scale &lt;em&gt;when success brings troubles&lt;/em&gt;. They will necessarily label the current architecture as technical debt and aim to get rid of it.&lt;/p&gt;

&lt;p&gt;I would argue, instead, that those characteristics are &lt;strong&gt;aspirational&lt;/strong&gt; and &lt;strong&gt;need to be validated&lt;/strong&gt;. I’ll go further: their absence might never effectively turn into debt. This is especially true when working on new functionality.&lt;/p&gt;

&lt;p&gt;This is why I call it Presumed Technical Debt. It’s coming from the &lt;strong&gt;non-validated expectation&lt;/strong&gt; that the software will definitely need those non-functional characteristics to deliver value effectively.&lt;/p&gt;

&lt;p&gt;In this case the technical debt is represented by the gap between the current state of the non-functional characteristics and the ideal version of them as desired by the developer.&lt;/p&gt;

&lt;h2&gt;Non-validated assumptions are a form of technical debt&lt;/h2&gt;

&lt;p&gt;Say the Product Organization has identified a new customer problem to solve. In an agile context you would work with your team to identify the minimum viable iteration that your customer could consume so that you can, among other things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;validate your understanding of the problem&lt;/li&gt;
&lt;li&gt;understand how your customer would use your software to solve the problem&lt;/li&gt;
&lt;li&gt;understand what is critical minimum functionality for your customer&lt;/li&gt;
&lt;li&gt;understand traffic patterns, auditing requirements, scalability needs, and so on&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You are making a bet on what you think is the right way of solving a problem and you want to validate this assumption early.&lt;/p&gt;

&lt;p&gt;You have an understanding based on your experience and ideas and you need to get something out there quickly before you go down a path that might take you far away from the real customer needs.&lt;/p&gt;

&lt;p&gt;Similarly, you might have assumptions about what the non-functional characteristics of your software should look like: does it really need &lt;em&gt;&lt;a href="https://uptime.is/three-nines"&gt;three nines&lt;/a&gt;&lt;/em&gt; of minimum availability? The reality might be that no one expects any traffic during weekends, so an 80% SLA is more than enough!&lt;/p&gt;
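&lt;p&gt;For context, the gap between those availability targets is easy to quantify. A back-of-the-envelope sketch (plain availability arithmetic, nothing more):&lt;/p&gt;

```python
# Allowed downtime per year implied by a given availability target.
HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours_per_year(availability: float) -> float:
    """Hours per year a system may be down and still meet the target."""
    return HOURS_PER_YEAR * (1 - availability)

for sla in (0.999, 0.99, 0.80):
    print(f"{sla:.1%} availability allows "
          f"{downtime_hours_per_year(sla):.1f} hours/year of downtime")
```

&lt;p&gt;Three nines allow roughly 8.8 hours of downtime per year; an 80% target allows about 1,750 hours, nearly five hours a day on average. Those are wildly different engineering problems, which is exactly why the target needs validating first.&lt;/p&gt;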

&lt;p&gt;There is &lt;a href="https://www.infoq.com/articles/agility-architecture/"&gt;a great article&lt;/a&gt; by Kurt Bittner and Pierre Pureur about continuously balancing a Minimum Viable Architecture with the Minimum Viable Product.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Investing in more than is needed to support the necessary level of success of the MVP would be wasteful since that level of investment (i.e. solving those problems) may never be needed; if the MVP fails, no more investment is needed in the MVA.&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;Kurt Bittner and Pierre Pureur – &lt;em&gt;Agility and Architecture: Balancing Minimum Viable Product and Minimum Viable Architecture&lt;/em&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Building expensive non-functional characteristics is a form of debt that you might have to carry throughout the whole lifecycle of the software you are building.&lt;/p&gt;

&lt;p&gt;If you start with &lt;em&gt;infinite&lt;/em&gt; horizontal scalability you are deliberately adding cost, overhead and constraints to keep up with this non-functional characteristic that you haven’t validated yet.&lt;/p&gt;

&lt;h2&gt;Don’t be DRY at all costs&lt;/h2&gt;

&lt;p&gt;Another common fallacy I see is the application of the DRY principle as an end goal. DRY stands for Don’t Repeat Yourself and it is often the reason why incredibly &lt;em&gt;complicated abstractions&lt;/em&gt; exist.&lt;/p&gt;

&lt;p&gt;I attribute many erroneous implementations of the DRY principle to what I call Presumed Technical Debt.&lt;/p&gt;

&lt;p&gt;It’s not rare for developers to deem two use cases similar enough to require a common abstraction. In my experience it’s unsurprisingly frequent to end up with an abstraction that hardly solves either use case.&lt;/p&gt;

&lt;p&gt;Often, developers build the abstraction out of the conviction that repeated code is a form of technical debt to avoid.&lt;/p&gt;

&lt;p&gt;In my experience the reality is that by the time you come up with the idea that two use cases might be similar enough to require a common abstraction, &lt;strong&gt;you actually don’t have enough data points to be able to make that decision&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Therefore, developing an abstraction out of a non-validated idea about commonality among two or more use cases results in an artificial constraint. Such a constraint will impact your pace and your ability to iterate swiftly as you receive feedback signals from what you have put in the hands of your customers.&lt;/p&gt;

&lt;p&gt;Only then will you actually have the data to decide whether you need an abstraction or not. Maybe you need a third use case first!&lt;/p&gt;
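&lt;p&gt;A contrived sketch of how this tends to play out (the use cases and function names here are made up purely for illustration):&lt;/p&gt;

```python
# Two use cases that *look* similar: formatting an invoice line and a
# receipt line. A premature shared abstraction ends up parameterised on
# every difference discovered along the way...
def format_line(label, amount, currency="EUR", show_tax=False,
                tax_rate=0.0, uppercase_label=False):
    label = label.upper() if uppercase_label else label
    line = f"{label}: {amount:.2f} {currency}"
    if show_tax:
        line += f" (incl. {tax_rate:.0%} tax)"
    return line

# ...while two small, "duplicated" functions stay trivial to change
# independently as each use case evolves on its own path:
def format_invoice_line(label, amount):
    return f"{label.upper()}: {amount:.2f} EUR (incl. 20% tax)"

def format_receipt_line(label, amount):
    return f"{label}: {amount:.2f} EUR"
```

&lt;p&gt;Every flag added to the shared function is a constraint both use cases now have to honour; the duplicated versions carry no such coupling.&lt;/p&gt;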

&lt;p&gt;There is a great &lt;a href="https://overreacted.io/the-wet-codebase/"&gt;talk&lt;/a&gt; by Dan Abramov, &lt;em&gt;The WET Codebase&lt;/em&gt;, that highlights the dangers of always striving to abstract code too early.&lt;/p&gt;

&lt;h2&gt;Presumed technical debt makes your job frustrating&lt;/h2&gt;

&lt;p&gt;Building software chasing an aspirational architecture is a recipe for frustration.&lt;/p&gt;

&lt;p&gt;This is why one should not use the term “technical debt” lightly. Labeling the lack of an arbitrarily ideal set of non-functional characteristics as technical debt will always create tension towards achieving an ideal architecture, regardless of whether it is necessary or not.&lt;/p&gt;

&lt;p&gt;In my experience the reality is different: most of the time the functionality you have built needs far less than the ideal non-functional characteristics. As developers, sometimes we hope we are &lt;strong&gt;going to be given the chance to work on complex architectures&lt;/strong&gt;, so we start with that upfront.&lt;/p&gt;

&lt;p&gt;A truly agile approach would leave many of the &lt;em&gt;made up&lt;/em&gt; non-functional requirements to be identified later, as the functionality starts getting used. Will it actually need to scale that much or is it rather delivering great value to my users while being interacted with sporadically? Will my users ever notice the system becoming unavailable at weekends?&lt;/p&gt;

&lt;p&gt;As professionals it is our job to &lt;em&gt;build for sustainability&lt;/em&gt;. If we design arbitrary, non-validated, non-functional characteristics we are actually building technical debt into our software unnecessarily. An overly complex architecture that doesn’t serve any valuable purpose for the end user might just add development overhead and slow down your organization.&lt;/p&gt;

&lt;h2&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;In this post I have highlighted what I find to be an emerging trend in software engineering: designing non-functional characteristics upfront, based on non-validated assumptions, hoping to prepare software for success and avoid technical debt.&lt;/p&gt;

&lt;p&gt;In reality, non-validated non-functional characteristics can turn into technical debt if they don’t end up supporting the real needs of the customers.&lt;/p&gt;

&lt;p&gt;While it is important to acknowledge the existence and impact of technical debt, it is equally crucial to differentiate between necessary technical debt and idealized technical debt.&lt;/p&gt;

&lt;p&gt;While baseline non-negotiable characteristics such as observability, security, and compliance requirements must be met, it is essential to validate and prioritize other characteristics such as scalability, availability, and resilience based on customer needs and feedback.&lt;/p&gt;

&lt;p&gt;Developers often fall into the trap of aiming for an idealized architecture before fully understanding the problem and user requirements. This leads to the development of code and architectures that may not align with the actual needs of the software. Making assumptions and designing for non-validated technical requirements can introduce unnecessary complexity and hinder the agility and speed of the development process.&lt;/p&gt;

&lt;p&gt;Rather than striving for an abstract ideal, it is important to approach software development with a mindset of sustainability and continuous reassessment. This means continuously monitoring and addressing technical debt that naturally accumulates over time, while avoiding the introduction of additional debt through arbitrary and non-validated technical requirements.&lt;/p&gt;




&lt;p&gt;I hope you enjoyed this post. If you like my writing you can follow me on &lt;a href="https://linkedin.com/in/alediaferia"&gt;LinkedIn&lt;/a&gt; to stay up-to-date on my thoughts on software engineering.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@martijnbaudoin?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash"&gt;Martijn Baudoin&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/audio-mixer-set-4h0HqC3K4-c?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://alediaferia.com/2023/10/26/presumed-technical-debt-how-to-recognise-it-and-avoid-it/"&gt;Presumed technical debt: how to recognise it and avoid it&lt;/a&gt; appeared first on &lt;a href="https://alediaferia.com"&gt;Alessandro Diaferia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>agile</category>
      <category>development</category>
      <category>engineering</category>
    </item>
    <item>
      <title>How to communicate efficiently as software engineers</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Mon, 25 Sep 2023 06:27:28 +0000</pubDate>
      <link>https://dev.to/alediaferia/how-to-communicate-efficiently-as-software-engineers-511d</link>
      <guid>https://dev.to/alediaferia/how-to-communicate-efficiently-as-software-engineers-511d</guid>
      <description>

&lt;p&gt;Conversation is the basis of collaboration. And yet, communication for software engineers is often such an overlooked soft skill. As software engineers we understand that different programming languages serve different purposes to describe to machines what we want to build. At the same time, different audiences and contexts call for different natural language subsets.&lt;/p&gt;

&lt;p&gt;I have often observed &lt;strong&gt;inefficient communication&lt;/strong&gt; between the people building the software and the ones consuming it in some form or another.&lt;/p&gt;

&lt;p&gt;Most of the time the inefficiency comes from each party’s inability to understand what the other is looking for: for example, engineers explain the technical challenges they are facing while a customer service representative might only be looking for a delivery date.&lt;/p&gt;

&lt;h2&gt;Different audiences call for different languages&lt;/h2&gt;

&lt;p&gt;When talking to other people we most likely use a specific subset of our dictionary to explain our thoughts, to make our message clear and to maximise the chances that our interlocutor understands what we are saying.&lt;/p&gt;

&lt;p&gt;The dictionary we use when we speak to our fellow engineers is quite specific and optimised to get technical concepts across. There is probably a huge amount of technical detail that we take for granted, coming from, to name a few sources, the technical expertise we believe our colleagues have, the specific technical context we are building within, recent conversations, the agreed technological direction of our organization, and so on.&lt;/p&gt;

&lt;p&gt;From time to time, though, we are required to interact with stakeholders from different departments or external to the company altogether. In this case the conversation will need to move to a non-technical level.&lt;/p&gt;

&lt;h2&gt;Does my audience even care?&lt;/h2&gt;

&lt;p&gt;Building software products that are actually useful requires software engineers to spend time with collaborators, users and colleagues, share perspectives and exchange feedback.&lt;/p&gt;

&lt;p&gt;Collaboration happening with different functions will require different problems to be discussed. To each its own preferred language.&lt;/p&gt;

&lt;p&gt;A product manager will be interested in your complexity assessment, but will not care about what Node module you are going to end up depending on to deliver the functionality.&lt;/p&gt;

&lt;p&gt;Getting deep into the details of the implementation you already have in mind will also bring the conversation down to a level that is not helpful at this stage, nor for this audience. I have discussed this issue in the past when I talked about the &lt;a href="https://dev.to/alediaferia/business-outcome-language-an-introduction-for-software-engineers-12h6-temp-slug-2771361"&gt;business outcome language&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Similarly, if you are explaining to a customer service representative why the software is behaving the way it is, going into the details of your current architecture will hardly help.&lt;/p&gt;

&lt;p&gt;As technologists we often miss the opportunity to communicate efficiently because we tend to pollute the conversation with unhelpful technical details. It is most likely due to our &lt;em&gt;forma mentis&lt;/em&gt; and the fact that we spend the majority of our time at the technical level.&lt;/p&gt;

&lt;p&gt;If we are explaining to a team of sales reps why we’re not hitting the originally forecasted date, what’s the point in telling them that &lt;em&gt;we need to migrate our current webpack version to the latest one but breaking changes are slowing us down&lt;/em&gt;? &lt;strong&gt;This level of detail is neither helpful nor actionable&lt;/strong&gt; for them. What is the new delivery date you have confidence in? That is all that matters in this conversation.&lt;/p&gt;

&lt;p&gt;When what you’re saying does not make sense to your audience you are contributing to making the communication inefficient. I’ve seen unnecessary technical details pollute the conversation, making it even harder for messages to get across. At times, an answer filled with irrelevant technical details results in the interlocutor shutting down completely, unsatisfied with the answer, and unwilling to ask again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But how can you make the communication more efficient?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;Know who you are communicating with&lt;/h2&gt;

&lt;p&gt;In my opinion, the first step towards efficient communication is being clear about who your audience is. What is their day job, and what do they care about? How does what you do impact them? What are they currently struggling with, and how is your work going to make their life easier? What can they do about what you are saying?&lt;/p&gt;

&lt;p&gt;If you keep these questions in mind when interacting with your audience, it’s going to be easier to refrain from including irrelevant details or digressing into unhelpful explanations.&lt;/p&gt;

&lt;p&gt;I know I framed this issue as a dichotomy between technical and non-technical audiences so far, but clear communication might also be lacking within the same department.&lt;/p&gt;

&lt;h3&gt;Bonus: you might think you know your audience&lt;/h3&gt;

&lt;p&gt;Even your CTO might be interested in a level of detail that is different from what you normally share with the rest of your team.&lt;/p&gt;

&lt;p&gt;Say you are having trouble upgrading to a newer version of a library because it’s introducing backwards-incompatible changes. You might feel it would be easy to get down to the technical level to explain why your team is late (again?). After all, you are talking to a C&lt;strong&gt;T&lt;/strong&gt;O. They must get it.&lt;/p&gt;

&lt;p&gt;I’m sure they do get it, but that’s not what they are interested in. If your team is facing unforeseen challenges, all that interests them is what your team is going to do to overcome them. Have you worked out an alternative plan? Are you confident about a new date by which you will be able to deliver the change?&lt;/p&gt;

&lt;h2&gt;Take the time to prepare for an efficient conversation&lt;/h2&gt;

&lt;p&gt;Communicating outside the comfort zone of the technical level might require a bit of preparation.&lt;/p&gt;

&lt;p&gt;Take the time to prepare. If there is a meeting coming up where you are going to share your team’s progress, focus on what outcome the people in the meeting are going to look for.&lt;/p&gt;

&lt;p&gt;Try to understand if you will have an answer for them. Sometimes we default to a technical explanation when we haven’t prepared a more useful and focused answer.&lt;/p&gt;

&lt;p&gt;Reflecting on this &lt;strong&gt;ahead&lt;/strong&gt; of the meeting also gives you a better chance to find the answer. As a leader, do not feel you must know the answer. Your job is actually to lead your team to find the answer rather than finding one on your own.&lt;/p&gt;

&lt;p&gt;Practice makes perfect. Take the time to think about who your audience is and start practising what you are going to say. Reflect on whether what you are saying is valuable for your audience and, if not, try again. Focus on the details that matter to them.&lt;/p&gt;

&lt;h2&gt;Focus on what’s actionable&lt;/h2&gt;

&lt;p&gt;Another important aspect for efficient cross-functional communication, in my experience, is how actionable what you are saying is.&lt;/p&gt;

&lt;p&gt;For example, it’s not unusual for software teams to have to communicate a date is slipping (I won’t get into the debate on deadlines here, but if you are interested I got into it in &lt;a href="https://alediaferia.com/2023/02/14/embracing-genuine-deadlines-as-software-engineers/"&gt;this other post&lt;/a&gt;): how are you communicating this?&lt;/p&gt;

&lt;p&gt;My recommendation is to focus on what’s actionable. Are you communicating this to the customer success department? Then maybe they don’t care &lt;em&gt;why&lt;/em&gt; the date is slipping, but rather how they are going to manage the communication with the customer. Therefore, focus on giving a new time frame your team has confidence in.&lt;/p&gt;

&lt;p&gt;Focusing on what’s actionable is also a good forcing function to come prepared to the conversation. Don’t just communicate that you are late; make the effort to adjust the original estimate and give your audience more actionable information.&lt;/p&gt;

&lt;p&gt;Reflect on how you communicate: are you conveying the information that the audience is looking for? Are you making it easy for them to extract the important bits or are you including distracting details?&lt;/p&gt;

&lt;h2&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;Communication is an often overlooked aspect of the work of software engineers but, in my opinion, it’s the most important one for achieving efficient, high-performing organisations. As individuals, working on perfecting our communication skills is a great investment in maximising the effectiveness of the teams we are part of.&lt;/p&gt;

&lt;p&gt;Efficient communication enables teams to iterate more quickly, raise issues early and course-correct fast, because information flows swiftly when people deliberately take care to pitch the conversation at the right level of detail.&lt;/p&gt;

&lt;p&gt;Take the time to invest in this skill: prepare conversations ahead of time, be mindful of your audience, and be deliberate about the information you are going to share.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://alediaferia.com/2023/09/25/how-to-communicate-efficiently-as-software-engineers/"&gt;How to communicate efficiently as software engineers&lt;/a&gt; appeared first on &lt;a href="https://alediaferia.com"&gt;Alessandro Diaferia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>agile</category>
      <category>communication</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Yes, TDD slows you down</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Wed, 26 Aug 2020 13:03:33 +0000</pubDate>
      <link>https://dev.to/alediaferia/yes-tdd-slows-you-down-ngd</link>
      <guid>https://dev.to/alediaferia/yes-tdd-slows-you-down-ngd</guid>
      <description>&lt;p&gt;I recently had the opportunity to reflect about testing practices. In particular, the different layers of testing: unit testing, integration testing, acceptance testing.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://martinfowler.com/bliki/TestPyramid.html"&gt;test pyramid&lt;/a&gt; suggests that each level has an associated cost. The cost of your tests then translates into the pace at which you can make changes to your system.&lt;/p&gt;

&lt;p&gt;Quite frequently, early-stage organisations tend to focus less on unit tests and more on &lt;a href="http://softwaretestingfundamentals.com/acceptance-testing/"&gt;acceptance testing&lt;/a&gt;. This tends to happen in the early days of a new product: requirements fluctuate heavily and you might feel that unit tests will slow you down when it comes to making radical changes to the implementation.&lt;/p&gt;

&lt;p&gt;By not investing in unit testing, though, you are giving up on the opportunity to do TDD. Yes, you will still be able to write acceptance tests before writing any code. But such coarse-grained tests will not constrain the way you structure your code and your components, or how they communicate with each other.&lt;/p&gt;

&lt;p&gt;Your acceptance tests will guarantee that your system meets the requirements, &lt;strong&gt;now&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;TDD is not just about testing&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Test-driven_development"&gt;TDD&lt;/a&gt; is a software development methodology that strictly dictates the order you do things in. And it’s not dictating that order for the sake of it. Writing tests first and letting that drive the functional code you write after is going to impact the way you &lt;a href="https://stackoverflow.com/questions/704855/software-design-vs-software-architecture"&gt;design and architect&lt;/a&gt; the system.&lt;/p&gt;
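&lt;p&gt;A minimal sketch of that order (the function and its behaviour are hypothetical, chosen only to illustrate the red/green/refactor rhythm):&lt;/p&gt;

```python
# Step 1 (red): write the test first; it pins down the behaviour we want.
def test_slugify():
    assert slugify("Technical Debt") == "technical-debt"
    assert slugify("presumed debt") == "presumed-debt"

# Running test_slugify() at this point fails: slugify doesn't exist yet.

# Step 2 (green): write the simplest implementation that passes the test.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Step 3 (refactor): with the test in place, the implementation can be
# reshaped freely while the test keeps the intended behaviour pinned down.
test_slugify()
```

&lt;p&gt;The design pressure comes from step 1: writing the call before the implementation forces you to decide the interface from the caller’s point of view.&lt;/p&gt;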

&lt;p&gt;This is going to have numerous benefits: your code will be modular, loosely coupled and with high cohesion, as well as being clearly documented and easy to extend.&lt;/p&gt;

&lt;p&gt;Unit testing is one aspect of TDD. It helps you at a fundamental level of your system: the implementation level, which is ultimately going to decide how agile you will be at making future changes.&lt;/p&gt;

&lt;p&gt;If you develop a system without a TDD approach you won’t necessarily have a system without tests. But you will most likely have a system that is hard to extend and hard to understand.&lt;/p&gt;

&lt;p&gt;This is the reason why I’m more and more convinced that you can’t really be that liberal about your testing layers. If you’re building new functionality and decide it is going to solely be tested through acceptance testing, you’re most likely going to miss out on the added benefits of TDD at your service layer.&lt;/p&gt;

&lt;p&gt;When you mix TDD with &lt;em&gt;non-TDD&lt;/em&gt; you’re not just compromising on the tests at certain layers of your test stack. You’re compromising on the architecture of your system.&lt;/p&gt;

&lt;h1&gt;Do you really need to go faster?&lt;/h1&gt;

&lt;p&gt;Of course you do! Who thinks that going &lt;em&gt;slower&lt;/em&gt; is better?&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--rRt4BnHt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/1097981531275096069/VxGsVDoq_normal.jpg" alt="☕ J. B. Rainsberger profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        ☕ J. B. Rainsberger
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @jbrains
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Worried that TDD will slow down your programmers? Don't. They probably need slowing down.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      17:23 - 08 Feb 2012
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=167297606698008576" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=167297606698008576" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      1437
      &lt;a href="https://twitter.com/intent/like?tweet_id=167297606698008576" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      1002
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;p&gt;Too bad the question usually needs rephrasing to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Do you really need to go faster now, and go slower later?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I think this way of phrasing it helps you make the right decision. As I said at the beginning of this post, going faster now might actually be the right decision. TDD might be what slows you down now, and you might not be able to afford slowing down now. But my advice is not to stop asking that question as you keep making compromises.&lt;/p&gt;

&lt;p&gt;When you defer the adoption of TDD you’re not just skipping unit tests. You’re developing code unconstrained, increasing the opportunities for high coupling and low cohesion. This inevitably makes you slower as you go. And it will keep making you slower until you incur such a high cost of change that implementing even the most trivial feature will seem like an insurmountable challenge. Not to mention &lt;a href="https://alediaferia.com/2018/02/14/technical-debt-kills-your-company/"&gt;this will impact your team's morale, as well as the confidence the rest of the organisation has in your team&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Be conscious
&lt;/h1&gt;

&lt;p&gt;As I said at the beginning of this post, giving up on unit tests might actually be the right thing to do for you, now. What’s important to realise, though, is that giving up on TDD at a certain layer doesn’t just mean giving up on tests. It means giving up on a framework that guides you through specific principles. If you are not going to do TDD, do you have an alternative set of principles to stick to? Do you have an alternative way that helps you structure your system effectively, to help you scale? Are you going to have the confidence to make changes? Are new team members going to be able to onboard easily and make changes without requiring additional contextual knowledge?&lt;/p&gt;




&lt;p&gt;If you enjoyed this post, chances are you’ll &lt;a href="https://twitter.com/alediaferia"&gt;enjoy what I post on Twitter&lt;/a&gt;. Thank you for getting to the end of this article.&lt;/p&gt;

</description>
      <category>tdd</category>
      <category>testing</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>How to keep your Amazon MQ queues clean</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Thu, 30 Jul 2020 19:56:42 +0000</pubDate>
      <link>https://dev.to/alediaferia/how-to-keep-your-amazon-mq-queues-clean-22k2</link>
      <guid>https://dev.to/alediaferia/how-to-keep-your-amazon-mq-queues-clean-22k2</guid>
      <description>&lt;p&gt;Amazon MQ queues might fill up if you use them in your tests but don’t take care of cleaning them up. Let’s explore together a way of addressing this issue.&lt;/p&gt;

&lt;p&gt;I was hoping to avoid writing dedicated code to just consume all the messages enqueued during tests so I started looking around for some tool I could integrate in my &lt;em&gt;Continuous Integration&lt;/em&gt; pipeline.&lt;/p&gt;

&lt;p&gt;I found &lt;a href="https://github.com/antonwierenga/amazonmq-cli"&gt;amazonmq-cli&lt;/a&gt;. I have to say, it’s not the most straightforward tool when used in a &lt;em&gt;CI&lt;/em&gt; pipeline. When it comes to command line options it’s not very flexible and it favours the use of &lt;em&gt;command files&lt;/em&gt; to read from. It also only allows for file-based configuration.&lt;/p&gt;

&lt;p&gt;Nevertheless, I downloaded and configured it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring amazonmq-cli in your CI pipeline
&lt;/h2&gt;

&lt;p&gt;Amazon MQ does not support JMX, so this tool needs Web Console access instead: make sure that’s configured. (If your broker does support JMX, you could give &lt;a href="https://github.com/antonwierenga/activemq-cli"&gt;activemq-cli&lt;/a&gt; a try.)&lt;/p&gt;

&lt;p&gt;Since I’m running this command as part of a clean-up job in my CI pipeline, I have to automate its configuration.&lt;/p&gt;

&lt;p&gt;We need to produce two files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the configuration file to be able to access the broker (and the web console)&lt;/li&gt;
&lt;li&gt;and the list of commands to execute&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you download and extract the tool you’ll see its directory structure is as follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ tree -L 1
.
├── bin
├── conf
├── lib
└── output

4 directories, 0 files
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There’s not much opportunity for customization, so we will have to produce the relevant configuration file and put it in the &lt;code&gt;conf&lt;/code&gt; folder (where you can find a sample one).&lt;/p&gt;

&lt;p&gt;First thing, then, is to create the broker configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"broker {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; aws-broker {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; web-console = &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$ACTIVEMQ&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;WEB&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;CONSOLE&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;URL&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt; amqurl = &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$ACTIVEMQ&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;BROKER&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;URL&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt; username = &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$ACTIVE&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;MQ&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;USER&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt; password = &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$ACTIVE&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;MQ&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;PASSWORD&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt; prompt-color = &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;light-blue&lt;/span&gt;&lt;span class="se"&gt;\"\n&lt;/span&gt;&lt;span class="s2"&gt; }&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span 
class="s2"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s2"&gt;web-console {&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; pause = 100&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt; timeout = 300000&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /home/ciuser/amazonmq-cli/conf/amazonmq-cli.config
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, I’ve also included a &lt;code&gt;web-console&lt;/code&gt; part that is expected by the tool when performing certain commands, like the &lt;code&gt;purge-all-queues&lt;/code&gt; one that I needed in this case.&lt;/p&gt;

&lt;p&gt;Now, let’s configure the list of commands to run.&lt;/p&gt;

&lt;p&gt;In my case all the queues generated in the CI environment during the tests share the same prefix. This makes it easier for me to purge them all or delete them later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"connect --broker aws-broker&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;purge-all-queues --force --filter &lt;/span&gt;&lt;span class="nv"&gt;$MY&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;SPECIAL&lt;/span&gt;&lt;span class="se"&gt;\_&lt;/span&gt;&lt;span class="s2"&gt;PREFIX&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; purge-commands.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Drain them!
&lt;/h2&gt;

&lt;p&gt;Finally, we can perform the command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;/home/ciuser/amazonmq-cli/bin/amazonmq-cli &lt;span class="nt"&gt;--cmdfile&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;/purge-commands.txt
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
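&lt;p&gt;For reference, the three steps above can be wrapped into one clean-up script for the CI job. This is a sketch, not part of the tool: the &lt;code&gt;CLI_HOME&lt;/code&gt; variable and the placeholder defaults are my own additions, while the variable names, file layout and commands are the ones used above.&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch of a single CI clean-up step for Amazon MQ queues via amazonmq-cli.
# Your pipeline should export the real values of the variables below;
# the defaults are placeholders so the sketch runs end to end.
set -euo pipefail

CLI_HOME="${CLI_HOME:-$PWD/amazonmq-cli}"   # the article uses /home/ciuser/amazonmq-cli
mkdir -p "$CLI_HOME/conf"

ACTIVEMQ_WEB_CONSOLE_URL="${ACTIVEMQ_WEB_CONSOLE_URL:-https://example.com:8162}"
ACTIVEMQ_BROKER_URL="${ACTIVEMQ_BROKER_URL:-ssl://example.com:61617}"
ACTIVE_MQ_USER="${ACTIVE_MQ_USER:-admin}"
ACTIVE_MQ_PASSWORD="${ACTIVE_MQ_PASSWORD:-secret}"
MY_SPECIAL_PREFIX="${MY_SPECIAL_PREFIX:-ci-test-}"

# 1. Broker + web-console configuration, read from conf/ by the tool.
cat > "$CLI_HOME/conf/amazonmq-cli.config" <<EOF
broker {
  aws-broker {
    web-console = "$ACTIVEMQ_WEB_CONSOLE_URL"
    amqurl = "$ACTIVEMQ_BROKER_URL"
    username = "$ACTIVE_MQ_USER"
    password = "$ACTIVE_MQ_PASSWORD"
    prompt-color = "light-blue"
  }
}

web-console {
  pause = 100
  timeout = 300000
}
EOF

# 2. The command file: connect, then purge every queue matching the prefix.
cat > purge-commands.txt <<EOF
connect --broker aws-broker
purge-all-queues --force --filter $MY_SPECIAL_PREFIX
EOF

# 3. Run the command file (skipped here if the tool is not installed).
if [ -x "$CLI_HOME/bin/amazonmq-cli" ]; then
  "$CLI_HOME/bin/amazonmq-cli" --cmdfile "$PWD/purge-commands.txt"
fi
```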



&lt;p&gt;That’s it! Let me know your experiences with cleaning up queues on Amazon MQ!&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://alediaferia.com/2020/07/30/how-to-keep-your-amazon-mq-queues-clean/"&gt;How to keep your Amazon MQ queues clean&lt;/a&gt; appeared first on &lt;a href="https://alediaferia.com"&gt;Alessandro Diaferia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>programming</category>
      <category>amazon</category>
      <category>bash</category>
    </item>
    <item>
      <title>The Mythical DevOps Engineer</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Mon, 27 Jul 2020 13:50:04 +0000</pubDate>
      <link>https://dev.to/alediaferia/the-mythical-devops-engineer-n-611</link>
      <guid>https://dev.to/alediaferia/the-mythical-devops-engineer-n-611</guid>
      <description>&lt;p&gt;I'm always a little suspicious of job specs looking for the so-called &lt;em&gt;DevOps Engineer&lt;/em&gt; role. Most of the times, they mention a variety of duties and responsibilities that make you wonder: "Are they hiring for a single role or a whole team?".&lt;/p&gt;

&lt;p&gt;If you look around, it's easy to see that roles containing the term &lt;em&gt;DevOps&lt;/em&gt; don't share the same meaning across different companies. Often, though, they stress the importance of being able to cover for what traditionally would have been the specialization of different people.&lt;/p&gt;

&lt;p&gt;Don't get me wrong: cross-functional expertise is definitely important. But I don't think &lt;em&gt;DevOps&lt;/em&gt; means replacing a multitude of specializations with a single role. Different specializations like Operations, Security, Testing, Development, Product Management and so on, are vast and require specific knowledge.&lt;/p&gt;

&lt;p&gt;This is why I think the key differentiator of successful &lt;em&gt;DevOps-enabled&lt;/em&gt; companies is that they enable the different specializations to collaborate effectively, having clear in mind that the goal is to deliver value to the end user.&lt;/p&gt;

&lt;p&gt;Overall, I don't think we should be talking about a &lt;em&gt;DevOps Engineer&lt;/em&gt;, but rather about &lt;em&gt;DevOps&lt;/em&gt; culture in organizations.&lt;/p&gt;

&lt;p&gt;But let's take a step back first.&lt;/p&gt;

&lt;h2&gt;
  
  
  What does DevOps mean, really?
&lt;/h2&gt;

&lt;p&gt;My personal take on the term is as follows.&lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1279460637341560833-104" src="https://platform.twitter.com/embed/Tweet.html?id=1279460637341560833"&gt;
&lt;/iframe&gt;

&lt;/p&gt;

&lt;p&gt;What I mean by this is that in a &lt;em&gt;DevOps&lt;/em&gt; organization the different specialities are incentivised to collaborate. The intrinsic tension between Dev making changes to the system and Ops wanting to keep the system stable dissolves in favour of a greater good: the value stream.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A stable system that delivers nothing is of no use to the company the same way an unstable system that keeps offering new functionality is of no use to the user.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The greater good, in this case, becomes the flow of value between the organization and its users. Dev and Ops are incentivised to work together to maximise this flow, understanding which bets worked out and which ones didn't. Being able to easily activate and deactivate functionality, resolve issues and, more generally, &lt;strong&gt;adapt and evolve&lt;/strong&gt;, is what really makes adopting a &lt;em&gt;DevOps&lt;/em&gt; mindset worth it.&lt;/p&gt;

&lt;p&gt;Overall, I think there shouldn't be a single &lt;em&gt;DevOps&lt;/em&gt; role but, rather, a set of specific specialities collaborating effectively.&lt;/p&gt;

&lt;p&gt;This ideal view of the terminology might sometimes clash with the reality of the job market. Companies willing to attract the best talent with the most current skills may end up advertising for roles that are counterproductive in the context of DevOps principles.&lt;/p&gt;

&lt;p&gt;But let's have a look at a few interesting job specs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F07%2Fjordan-whitfield-sm3Ub_IJKQg-unsplash-1536x863.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F07%2Fjordan-whitfield-sm3Ub_IJKQg-unsplash-1536x863.jpg" alt="work harder neon sign photo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@whitfieldjordan?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Jordan Whitfield&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/job-search?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are companies looking for?
&lt;/h2&gt;

&lt;p&gt;Let's read through a few excerpts from job specs I found out there in the wild.&lt;/p&gt;

&lt;h3&gt;
  
  
  The flexible problem solver
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;[...] Devops Engineers are IT professionals who collaborate with software developers, system operators and other IT staff members to manage code releases. They cross and merge the barriers that exist between software development, testing and operations teams and keep existing networks in mind as they design, plan and test. Responsible for multitasking and dealing with multiple urgent situations at a time, Devops Engineers must be extremely flexible. [...]&lt;/p&gt;

&lt;p&gt;A job spec on the internet&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is one of those classic examples where the organization believes that the DevOps principles should be delegated to a single team.&lt;/p&gt;

&lt;p&gt;The spec mentions the myriad of duties that are the responsibility of the &lt;em&gt;Devops Engineers&lt;/em&gt; in the company. A &lt;em&gt;Devops Engineer&lt;/em&gt; is expected to &lt;em&gt;"multi-task and deal with multiple urgent situations at a time"&lt;/em&gt;. Therefore, they &lt;em&gt;"must be extremely flexible"&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Multitasking&lt;/em&gt; and dealing with multiple urgent situations at a time is, for sure, likely to happen anywhere: I don't think this should be a peculiarity of a role in an organization. I think the aspiration should be that every engineer in the team is enabled to handle urgent situations and learn from them so that the organization can improve and mitigate the possibility of service disruption.&lt;/p&gt;

&lt;p&gt;Urgent situations shouldn't be the norm; rather, the organization should empower its people to adapt to and learn from such situations so that they remain an exception.&lt;/p&gt;

&lt;p&gt;The way this role is advertised leads me to think that there is no effort to really adopt &lt;em&gt;DevOps&lt;/em&gt; practices: teams are not really incentivised to collaborate and improve but rather the expectation is that there is a dedicated team to throw issues and urgent situations at. This job spec would be a big red flag for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  The productivity booster
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;A DevOps Engineer combines an understanding of both engineering and coding. A DevOps Engineer works with various departments to create and develop systems within a company. From creating and implementing software systems to analysing data to improve existing ones, a DevOps Engineer increases productivity in the workplace.&lt;/p&gt;

&lt;p&gt;Another job spec on the internet&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In a &lt;em&gt;DevOps&lt;/em&gt;-enabled organization engineers &lt;em&gt;do&lt;/em&gt; work with various departments. But what's the point then of having a dedicated &lt;em&gt;DevOps Engineer&lt;/em&gt; role? Do the other types of engineers not work with the various departments of the organization? Do non-DevOps Engineers not analyse data and improve existing systems? Additionally, the job spec claims that &lt;em&gt;a DevOps Engineer increases productivity in the workplace&lt;/em&gt;. How? Does it radiate productivity?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F78p2ielj7w2yabchvxcl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F78p2ielj7w2yabchvxcl.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Release Manager... but &lt;em&gt;DevOps&lt;/em&gt;!
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;A DevOps Engineer works with developers and the IT staff to oversee the code releases. [...] Ultimately, you will execute and automate operational processes fast, accurately and securely.&lt;/p&gt;

&lt;p&gt;My favourite so far&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is quite a condensed one but what strikes me as interesting is the release aspect mentioned in it.&lt;/p&gt;

&lt;p&gt;It is quite a complex aspect of the DevOps culture in my opinion. I tend to separate the concept of &lt;em&gt;deployment&lt;/em&gt; from the one of &lt;em&gt;release&lt;/em&gt;. Product updates as experienced by the user are governed by a release policy that may or may not be the same as the deployment policy. This really depends on the strategy of the organization.&lt;/p&gt;

&lt;p&gt;Regardless of this distinction, though, I believe that constraining the capability of delivering value to the end user to a specific role undermines the agility of an organization.&lt;/p&gt;

&lt;p&gt;The teams should be able to continuously release code into production. The release of functionality should be controlled through mechanisms such as &lt;em&gt;&lt;a href="https://martinfowler.com/articles/feature-toggles.html" rel="noopener noreferrer"&gt;feature flags&lt;/a&gt;&lt;/em&gt; so that the code that reaches production does not necessarily &lt;em&gt;activate&lt;/em&gt;. This makes it possible for the organization to control when the functionality actually reaches the user.&lt;/p&gt;
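&lt;p&gt;A minimal sketch of that mechanism, in shell (the flag name is hypothetical, and a real system would typically read the flag from a feature-flag service rather than an environment variable): the code is deployed either way, and the flag alone decides whether it activates.&lt;/p&gt;

```shell
# Hypothetical feature flag read at runtime: deploying this code is a
# non-event; flipping FEATURE_NEW_CHECKOUT is the actual release.
FEATURE_NEW_CHECKOUT="${FEATURE_NEW_CHECKOUT:-false}"

serve_checkout() {
  if [ "$FEATURE_NEW_CHECKOUT" = "true" ]; then
    echo "new checkout flow"
  else
    echo "current checkout flow"
  fi
}

serve_checkout
```

&lt;p&gt;Flipping the flag is then a decision the relevant stakeholders can take independently of any deployment.&lt;/p&gt;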

&lt;p&gt;In general, a deployment should be a non-event: nothing special, just another merge into the main branch that causes code to end up in production. Moreover, releasing the actual functionality to the user should not require a dedicated engineer to be performed: the relevant stakeholders in the company, usually departments other than engineering, should be able to enable the functionality to the user, perform experiments and autonomously decide when it is appropriate to &lt;em&gt;release&lt;/em&gt; new functionality.&lt;/p&gt;

&lt;p&gt;Job specs like this one feel like they're trying to repurpose the role of the &lt;em&gt;Release Manager&lt;/em&gt; to keep up with the latest trends by just changing a few words.&lt;/p&gt;

&lt;p&gt;I don't think release management goes away in a &lt;em&gt;DevOps&lt;/em&gt;-enabled organization. What changes is that the concept of &lt;em&gt;deployment&lt;/em&gt; gets decoupled from that of &lt;em&gt;release&lt;/em&gt;. This is done to enhance the agility of the engineering organization and reduce the risk intrinsic to changes reaching production environments.&lt;/p&gt;

&lt;p&gt;At the same time, the relevant technological changes are implemented so that &lt;em&gt;releasing&lt;/em&gt; new functionality to the users can become a non-event, in the hands of the strategic initiatives of the organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Platform Engineer. But &lt;em&gt;cooler!&lt;/em&gt;
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;The DevOps Engineer will be a key leader in shaping processes and tools that enable cross-functional collaboration and drive CI/CD transformation. The DevOps Engineer will work closely with product owners, developers, and external development teams to build and configure a high performing, scalable, cloud-based platform that can be leveraged by other product teams.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This job spec is one of those that I consider the least bad. It describes a set of responsibilities that usually pertain to a Platform or Infrastructure team. These teams often get renamed to DevOps Team, and their members become DevOps Engineers, for &lt;em&gt;fashion&lt;/em&gt; reasons.&lt;/p&gt;

&lt;p&gt;The Platform Engineering team becomes the key enabler for organizations that want to embrace the DevOps principles. But thinking that such principles will only be embraced by that team will hardly result in a successful journey.&lt;/p&gt;

&lt;p&gt;The Platform Engineering team will surely be responsible for building the relevant infrastructure that enables the other teams to build on top, but they can't be left alone in the understanding and application of those principles.&lt;/p&gt;

&lt;p&gt;Developer teams will need to become autonomous in adopting and making changes to those systems; they will need to understand the implications of their code running in production; understand how to recognize if the system is not behaving as expected and be able to react to it.&lt;/p&gt;

&lt;p&gt;At the same time, even the product team should spend time understanding what new important capabilities derive from successfully adopting DevOps practices. Code continuously flowing into production behind feature flags, containerization technologies, improved monitoring and alerting, and so on, open endless opportunities for improved user experience and experimentation that should be leveraged to maximise the company's competitiveness.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F07%2Fmatteo-vistocco-Dph00R2SwFo-unsplash-scaled.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F07%2Fmatteo-vistocco-Dph00R2SwFo-unsplash-scaled.jpg" alt="people riding boat on body of water photo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@mrsunflower94?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Matteo Vistocco&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/collaboration?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What should companies be looking for?
&lt;/h2&gt;

&lt;p&gt;We've just gone through a few job specs that look for variations of a &lt;em&gt;DevOps Engineer&lt;/em&gt; role and I've outlined what aspects I think are flawed in those roles. But what should companies look for, then?&lt;/p&gt;

&lt;p&gt;Before blindly starting to hire for roles driven by industry fashion trends, organizations should rather invest in understanding what's holding them back from being &lt;em&gt;DevOps&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://itrevolution.com/the-unicorn-project/" rel="noopener noreferrer"&gt;Unicorn Project&lt;/a&gt;, &lt;a href="https://twitter.com/RealGeneKim" rel="noopener noreferrer"&gt;Gene Kim&lt;/a&gt; mentions the &lt;em&gt;Five Ideals&lt;/em&gt; of successful DevOps organizations. I think they're an effective set of principles to take the temperature of your organization in terms of DevOps practices. Those ideals are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Locality and Simplicity&lt;/li&gt;
&lt;li&gt;Focus, Flow and Joy&lt;/li&gt;
&lt;li&gt;Improvement of Daily Work&lt;/li&gt;
&lt;li&gt;Psychological Safety&lt;/li&gt;
&lt;li&gt;Customer Focus&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Locality and Simplicity
&lt;/h3&gt;

&lt;p&gt;Making changes to the system, in order to deliver greater value to the end user, should be easy: easy in terms of the team's autonomy to make changes to an area of the product, as well as in terms of the friction that the technology in use imposes on those changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Focus, Flow and Joy
&lt;/h3&gt;

&lt;p&gt;Developers should be able to focus on their work and be able to make software with minimum impediments. This is facilitated by making sure that the software development lifecycle infrastructure is working for the benefit of the engineering organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improvement of Daily Work
&lt;/h3&gt;

&lt;p&gt;Continuously learning and improving the conditions in which the work gets done is the key to maximise the flow of value and the happiness of the people doing the work. Successful organizations facilitate a continuously improving environment by enabling engineers to build tools and practices that improve their daily operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Psychological Safety
&lt;/h3&gt;

&lt;p&gt;An organization will hardly be able to improve if the people that are part of it are not incentivised to raise issues and discuss them. This is not something you solve for by hiring a specific role. It is the organization's responsibility to facilitate an environment where constructive feedback is the norm.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customer Focus
&lt;/h3&gt;

&lt;p&gt;Last but not least, the engineering organization, just like any other department in the company, should be sharply focused on the customer. All the efforts should be balanced against what's best for the customer and, ultimately, for the company.&lt;/p&gt;




&lt;p&gt;What should companies be looking for then? I think the priority should be on understanding what's blocking the organization from fully embracing a DevOps mindset. Once that's established, the expertise needed to get there is probably going to be easier to identify.&lt;/p&gt;

&lt;p&gt;The most important aspect for me, though, is understanding the importance of specialities. Every role will have incredible value to add to the journey towards DevOps. What's fundamental is understanding the importance of collaboration between the different roles. The organization will have to put the relevant practice changes in place to facilitate collaboration. Specific DevOps knowledge in terms of technology, tools and best practices, will be required, for sure, but it won't be something a single role should be responsible for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F07%2Froi-dimor-70lKY2pk3yo-unsplash-1-1536x1024.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F07%2Froi-dimor-70lKY2pk3yo-unsplash-1-1536x1024.jpg" alt="nude man statue photo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@roi_dimor?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Roi Dimor&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/myth?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A mythical role
&lt;/h2&gt;

&lt;p&gt;It feels like the DevOps Engineer is a mythical figure that certain organizations pursue in the hope of finding the holy grail of a Software Engineer capable of doing anything.&lt;/p&gt;

&lt;p&gt;This, of course, will hardly be the case. Recognizing the importance of the single specializations is what makes organizations successful and capable of maximising the expertise of the people they are made of.&lt;/p&gt;

&lt;p&gt;What happens in a DevOps organization is that responsibilities are redistributed: developers are empowered to make changes to production environments because organizations recognize the importance of moving fast. This means opportunities for success increase together with the opportunities of failure.&lt;/p&gt;

&lt;p&gt;Eliminating barriers and creating a safe space for collaboration helps Devs and Ops work together to resolve issues when they occur. This is what ultimately leads to high performing teams that are incentivised to follow the North Star of the continuous value stream to the end user.&lt;/p&gt;

&lt;p&gt;Instead of pursuing a mythical role, then, let's go after the much more plausible alternative of creating a well-oiled machine where all the people are incentivised to work together in harmony with the clear goal of maximising the value to the end user.&lt;/p&gt;




&lt;p&gt;Thanks for getting to the end of this article. I sincerely hope you've enjoyed it. &lt;strong&gt;&lt;a href="https://twitter.com/alediaferia" rel="noopener noreferrer"&gt;Follow me on Twitter&lt;/a&gt;&lt;/strong&gt; if you want to stay up-to-date with all my articles and the software I work on.&lt;/p&gt;

&lt;p&gt;Cover photo by &lt;a href="https://unsplash.com/@rhii?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Rhii Photography&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/mythical?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>career</category>
      <category>culture</category>
    </item>
    <item>
      <title>How I enabled CORS for any API on my Single Page App</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Tue, 07 Jul 2020 17:14:08 +0000</pubDate>
      <link>https://dev.to/alediaferia/how-i-enabled-cors-for-any-api-on-my-single-page-app-n0b</link>
      <guid>https://dev.to/alediaferia/how-i-enabled-cors-for-any-api-on-my-single-page-app-n0b</guid>
      <description>&lt;p&gt;In this blog post I’ll show you how I used free services available to anyone to build a little proxy server for my app to overcome certain &lt;em&gt;CORS&lt;/em&gt; limitations for my Single Page App.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://chisel.cloud"&gt;Chisel&lt;/a&gt; to help with some repetitive API responses composition and manipulation that I was doing at work.&lt;/p&gt;

&lt;p&gt;It’s a single page app that allows you to perform requests against any API endpoint and compose results to extract only what you need. &lt;a href="https://alediaferia.com/2020/05/08/how-used-chisel-pull-gitlab-pipelines-stats/"&gt;It also allows for CSV exports.&lt;/a&gt; Pretty straightforward.&lt;/p&gt;

&lt;p&gt;Since it’s still in its earliest days, I decided to build it with the simplest possible architecture so that I could iterate quickly. I went for &lt;a href="https://jamstack.org/best-practices/"&gt;the JAMstack&lt;/a&gt;, built it in React and deployed it on Netlify.&lt;/p&gt;

&lt;p&gt;Since it doesn’t have a back-end server it talks to, anything you do stays on your machine. Unfortunately, not all APIs allow for cross-origin requests so, in certain cases, you won’t be able to perform any request from your browser unless you enable the &lt;em&gt;proxy&lt;/em&gt; functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h1vDUNVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-11.23.20-768x94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h1vDUNVx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-11.23.20-768x94.png" alt=""&gt;&lt;/a&gt;Proxy feature on chisel.cloud&lt;/p&gt;

&lt;p&gt;What happens if you don’t is that your browser will attempt a CORS preflight request which will fail if the API doesn’t respond with the expected headers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8MXCXx45--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-16.24.28-768x76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8MXCXx45--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-16.24.28-768x76.png" alt=""&gt;&lt;/a&gt;&lt;em&gt;CORS&lt;/em&gt; preflight request failure&lt;/p&gt;

&lt;h2&gt;
  
  
  What is &lt;em&gt;CORS&lt;/em&gt; and when is it a problem for your Single Page App?
&lt;/h2&gt;

&lt;p&gt;From the MDN documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Cross-Origin Resource Sharing&lt;/strong&gt; (&lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/CORS"&gt;CORS&lt;/a&gt;) is a mechanism that uses additional &lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/HTTP"&gt;HTTP&lt;/a&gt; headers to tell browsers to give a web application running at one &lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/origin"&gt;origin&lt;/a&gt;, access to selected resources from a different origin. A web application executes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, or port) from its own.&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS"&gt;https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS&lt;/a&gt;&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, there are certain requests, called &lt;em&gt;Simple Requests&lt;/em&gt;, that don’t trigger CORS checks. Unfortunately, these types of requests are quite limited and don’t allow you to pass certain headers like the &lt;code&gt;Authorization&lt;/code&gt; one (e.g. a basic-auth request). You can read more about these types of requests &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Simple_requests"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For this reason, we’re going to allow a good set of HTTP methods and headers to pass through our proxy and return the response as unchanged as possible.&lt;/p&gt;

&lt;p&gt;The bulk of the work will be configuring the right set of &lt;code&gt;Access-Control-Allow-*&lt;/code&gt; headers to be returned to the browser when CORS preflight checks are performed. I recommend you have a look at the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS"&gt;MDN Documentation&lt;/a&gt; to learn more about CORS as it is quite comprehensive.&lt;/p&gt;
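&lt;p&gt;For illustration, a preflighted exchange looks roughly like the following (the path and header values here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPTIONS /api/resource HTTP/1.1
Origin: https://chisel.cloud
Access-Control-Request-Method: GET
Access-Control-Request-Headers: authorization

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://chisel.cloud
Access-Control-Allow-Methods: GET, POST, OPTIONS
Access-Control-Allow-Headers: authorization
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;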

&lt;h2&gt;
  
  
  The proxy
&lt;/h2&gt;

&lt;p&gt;In order to allow any request to pass the CORS preflight checks I built a simple proxy server that returns the expected headers to the browser and passes the requests through to the destination server.&lt;/p&gt;

&lt;p&gt;You can find the source code for it on &lt;a href="https://github.com/chiselcloud/proxy"&gt;GitHub&lt;/a&gt;, but let’s go through the steps to build your own for free.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up NGINX
&lt;/h2&gt;

&lt;p&gt;The proxy itself is a simple instance of NGINX configured with a server that allows for proxied requests to a dynamic destination.&lt;/p&gt;

&lt;p&gt;In order to be able to run NGINX on Heroku we have to make some changes to run it as a non-privileged user.&lt;/p&gt;

&lt;p&gt;We’re basically making sure that NGINX only tries to write to unprivileged, &lt;em&gt;writable&lt;/em&gt; locations: this is because Heroku enforces that our container runs as non-root. You can read more about it &lt;a href="https://devcenter.heroku.com/articles/container-registry-and-runtime"&gt;here&lt;/a&gt;.&lt;/p&gt;
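&lt;p&gt;As a sketch, the kind of &lt;code&gt;nginx.conf&lt;/code&gt; changes involved look like this (the paths are illustrative; check the repository for the actual file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Write the pid file and temp files under /tmp, which is writable by non-root users
pid /tmp/nginx.pid;

http {
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path       /tmp/proxy_temp;
    fastcgi_temp_path     /tmp/fastcgi_temp;
    uwsgi_temp_path       /tmp/uwsgi_temp;
    scgi_temp_path        /tmp/scgi_temp;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;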

&lt;h3&gt;
  
  
  Accounting for any URL
&lt;/h3&gt;

&lt;p&gt;The second aspect of this configuration is actually defining our dynamic proxy: we will translate requests to any URL so that they will expose the right CORS information.&lt;/p&gt;

&lt;p&gt;The main complexity of the &lt;a href="https://chisel.cloud"&gt;Chisel&lt;/a&gt; case resides in the fact that we want to allow any URL to be proxied. This is because we won’t know in advance what URL the user will type in, of course.&lt;/p&gt;

&lt;p&gt;The way NGINX allows for setting up the proxy functionality is through the &lt;code&gt;proxy_pass&lt;/code&gt; directive:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Sets the protocol and address of a proxied server and an optional URI to which a location should be mapped. As a protocol, “&lt;code&gt;http&lt;/code&gt;” or “&lt;code&gt;https&lt;/code&gt;” can be specified.&lt;/p&gt;

&lt;p&gt;&lt;cite&gt;The NGINX documentation&lt;/cite&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In order to be able to specify the destination URL dynamically I decided to go with a custom header: &lt;code&gt;X-Chisel-Proxied-Url&lt;/code&gt;. This way &lt;code&gt;Chisel&lt;/code&gt; will use that header to tell the proxy which destination to proxy through to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;proxy_pass $http_x_chisel_proxied_url;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;$&lt;/code&gt; symbol in NGINX is used to reference variables and the HTTP headers get automatically converted to &lt;code&gt;$http_&lt;/code&gt; prefixed &lt;a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#var_http_"&gt;variables&lt;/a&gt; using the above syntax.&lt;/p&gt;

&lt;p&gt;There are quite a few things to go through in this NGINX server configuration. Let’s start with the &lt;code&gt;location /&lt;/code&gt; block first.&lt;/p&gt;

&lt;p&gt;The first bit in there is the &lt;code&gt;if&lt;/code&gt; statement: it handles the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#Preflighted_requests"&gt;CORS preflighted requests&lt;/a&gt; case, allowing a default set of HTTP methods and headers. It restricts everything to the &lt;code&gt;https://chisel.cloud&lt;/code&gt; Origin, simply because I don’t want my proxy to be used by other applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;proxy_redirect off&lt;/code&gt;: I disabled redirects for now. I’m still not sure how I’m going to handle them so I decided to turn them off until I can find a use case for them.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_set_header Host $proxy_host&lt;/code&gt;: this is simply forwarding the destination host as the &lt;code&gt;Host&lt;/code&gt; header. This is a requirement for valid HTTP requests through browsers. This value will be exactly the same as the one being set for &lt;code&gt;proxy_pass&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_set_header X-Real-IP $remote_addr&lt;/code&gt;: here we’re simply taking care of forwarding the client IP through to the destination.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_pass $http_x_chisel_proxied_url&lt;/code&gt;: this is the real important bit of the whole configuration. We’re taking the header coming in from the Chisel client application and setting it as the URL to pass through to. This is effectively making the dynamic proxy possible.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;proxy_hide_header 'access-control-allow-origin'&lt;/code&gt;: this, together with the following &lt;code&gt;add_header 'access-control-allow-origin' 'https://chisel.cloud'&lt;/code&gt; is basically making sure to override whatever &lt;code&gt;Access-Control-Allow-Origin&lt;/code&gt; header is coming back from the destination server with one that only allows requests from our Chisel application.&lt;/li&gt;
&lt;/ul&gt;
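&lt;p&gt;Putting the directives above together, the &lt;code&gt;location /&lt;/code&gt; block looks roughly like this (a sketch: the exact list of allowed methods and headers is illustrative, and the real file in the repository may differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location / {
    # Answer CORS preflight requests directly, restricted to the Chisel origin
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' 'https://chisel.cloud';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, X-Chisel-Proxied-Url';
        return 204;
    }

    proxy_redirect off;
    proxy_set_header Host $proxy_host;
    proxy_set_header X-Real-IP $remote_addr;

    # The dynamic destination comes from the custom header
    proxy_pass $http_x_chisel_proxied_url;

    # Override whatever the upstream returns with our own origin policy
    proxy_hide_header 'access-control-allow-origin';
    add_header 'access-control-allow-origin' 'https://chisel.cloud';
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;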

&lt;p&gt;Finally, the top two directives.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;resolver&lt;/code&gt;: this is needed so that NGINX knows how to resolve the names of the upstream servers to proxy through to. In my case I picked a public free DNS. You can pick yours from &lt;a href="https://public-dns.info/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;listen $ __PORT__ $ default_server&lt;/code&gt;: this one, instead, is the directive that makes everything possible using Docker on Heroku. We will have a look at it later in this blog post, so keep reading!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building the container image
&lt;/h2&gt;

&lt;p&gt;As mentioned above, I’m going to use NGINX’s &lt;a href="https://hub.docker.com/_/nginx"&gt;base image&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Uto1ZD4p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/carbon3-768x342.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Uto1ZD4p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/carbon3-768x342.png" alt=""&gt;&lt;/a&gt;Dockerfile&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; is pretty simple. We’re replacing the default &lt;code&gt;nginx.conf&lt;/code&gt; with our own to make sure that NGINX can run unprivileged. We’re also copying our proxy server configuration.&lt;/p&gt;

&lt;p&gt;As you can see, I have named the file &lt;code&gt;proxy.conf.tpl&lt;/code&gt;. I’ve done this to be explicit about the fact that the file is not ready to be used as is: we will have to dynamically edit the port it is going to listen on at runtime, before starting NGINX.&lt;/p&gt;

&lt;p&gt;As clarified in the &lt;a href="https://devcenter.heroku.com/articles/container-registry-and-runtime#dockerfile-commands-and-runtime"&gt;documentation&lt;/a&gt;, Heroku expects the containers to be able to listen on the value specified within the &lt;code&gt;$PORT&lt;/code&gt; environment variable. The solution we’re using here, then, is making sure to replace the &lt;code&gt;$ __PORT__ $&lt;/code&gt; placeholder I have included in the configuration with the actual content of the &lt;code&gt;$PORT&lt;/code&gt; environment variable.&lt;/p&gt;
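&lt;p&gt;For reference, the &lt;code&gt;Dockerfile&lt;/code&gt; in the screenshot is along these lines (a sketch: the base image tag and the &lt;code&gt;sed&lt;/code&gt; invocation are one way of doing the substitution, not necessarily the exact one used):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:alpine

COPY nginx.conf /etc/nginx/nginx.conf
COPY proxy.conf.tpl /etc/nginx/conf.d/proxy.conf.tpl

# At runtime, replace the port placeholder with Heroku's $PORT, then start NGINX
CMD sed 's/\$ __PORT__ \$/'"$PORT"'/g' /etc/nginx/conf.d/proxy.conf.tpl \
      &gt; /etc/nginx/conf.d/proxy.conf \
    &amp;&amp; nginx -g 'daemon off;'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;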

&lt;h2&gt;
  
  
  Setting up Heroku
&lt;/h2&gt;

&lt;p&gt;We’re almost there. Now we need to configure our application so that we can deploy our container straight from our repository.&lt;/p&gt;

&lt;p&gt;Create a &lt;a href="https://dashboard.heroku.com/new-app"&gt;&lt;em&gt;new lovely app&lt;/em&gt;&lt;/a&gt; on Heroku so that we can prepare it to work with containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zS5Fj1yF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-11.54.10-768x473.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zS5Fj1yF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-11.54.10-768x473.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, let’s configure the app to work with container images. I haven’t found a way to do it through the dashboard so let’s go ahead with the command line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1v12ncmC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/carbon2-768x293.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1v12ncmC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/carbon2-768x293.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now add a simple &lt;code&gt;heroku.yml&lt;/code&gt; file to your repository so that Heroku knows what to do to build the image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build: docker: web: Dockerfile
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Simple as that.&lt;/p&gt;

&lt;p&gt;Now, in the &lt;em&gt;Deploy&lt;/em&gt; tab of your application dashboard, make sure you connect your repository to the app: this way you’ll be able to deploy automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oDxKgfwc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-15.45.06-768x111.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oDxKgfwc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-15.45.06-768x111.png" alt=""&gt;&lt;/a&gt;Heroku Deploy section – Github connection&lt;/p&gt;

&lt;p&gt;Your proxy is finally ready to go. Once you kick off the deploy you’ll be able to see it start up in the application logs as follows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5pkq7y8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-15.47.03-768x118.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5pkq7y8J--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/Screenshot-2020-07-05-at-15.47.03-768x118.png" alt=""&gt;&lt;/a&gt;Startup application logs&lt;/p&gt;

&lt;p&gt;As you can see, the process is being started using the command we have specified through the &lt;code&gt;CMD&lt;/code&gt; directive and the &lt;code&gt;PORT&lt;/code&gt; value is being injected by Heroku.&lt;/p&gt;

&lt;p&gt;With the proxy up you’ll now be able to forward your requests through the proxy. As mentioned above, you will need to use the custom &lt;code&gt;X-Chisel-Proxied-Url&lt;/code&gt; header (or whatever header you decide to configure for your proxy) to specify the original URL the user intended to hit.&lt;/p&gt;

&lt;p&gt;As you can see from the animated gif below, the proxy feature allows you to overcome the CORS limitation when hitting the &lt;a href="https://date.nager.at/"&gt;Nager.Date&lt;/a&gt; API from Chisel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0cUvCuK7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/chisel-proxy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0cUvCuK7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/07/chisel-proxy.gif" alt=""&gt;&lt;/a&gt;The Chisel proxy in action&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We have just built a proxy server reusing open-source technology. This allows us to keep our Single Page App separate from the server logic that’s needed to overcome the CORS limitations.&lt;/p&gt;

&lt;p&gt;In general, CORS is one of the security measures your browser employs to mitigate certain opportunities for hijacking your website to perform unintended activity. Even though we have just examined a way of bypassing this limitation, always think twice about whether doing so is appropriate for your use case.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this quick walk-through to build your own free proxy server. Don’t forget to &lt;a href="https://twitter.com/alediaferia"&gt;&lt;strong&gt;follow me on Twitter&lt;/strong&gt;&lt;/a&gt; for more content like this.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://alediaferia.com/2020/07/07/how-built-dynamic-proxy-server-spa-cors/"&gt;This post&lt;/a&gt; appeared first on &lt;a href="https://alediaferia.com"&gt;Alessandro Diaferia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How I used GCP to create the transcripts for my Podcast</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Thu, 14 May 2020 16:07:54 +0000</pubDate>
      <link>https://dev.to/alediaferia/how-i-used-gcp-to-create-the-transcripts-for-my-podcast-1pec</link>
      <guid>https://dev.to/alediaferia/how-i-used-gcp-to-create-the-transcripts-for-my-podcast-1pec</guid>
      <description>&lt;p&gt;I’m currently working on a series of episodes for a Podcast I’ll be publishing soon. The Podcast will be in Italian and I wanted to make sure to publish the episode transcripts together with the audio episodes.&lt;/p&gt;

&lt;p&gt;The idea of manually typing out the text of all the episodes wasn’t really appealing to me, so I started looking around.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the tools out there?
&lt;/h2&gt;

&lt;p&gt;From a quick Google search, it seems that some companies are offering a mix of automated and human-driven transcription services.&lt;/p&gt;

&lt;p&gt;I wasn’t really interested in that for now. I was, of course, just interested in consuming an API I could push my audio to and get back some text in a reasonable amount of time.&lt;/p&gt;

&lt;p&gt;For this reason, I started looking for &lt;em&gt;speech-to-text&lt;/em&gt; APIs and, of course, the usual suspects figured among the first results.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://azure.microsoft.com/en-us/services/cognitive-services/speech-to-text/" rel="noopener noreferrer"&gt;Microsoft Cognitive Services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.ibm.com/cloud/watson-speech-to-text" rel="noopener noreferrer"&gt;IBM Watson Speech-to-Text&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.speechmatics.co" rel="noopener noreferrer"&gt;SpeechMatics&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/transcribe/" rel="noopener noreferrer"&gt;Amazon Transcribe&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/speech-to-text" rel="noopener noreferrer"&gt;Google Cloud Speech-to-Text&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To be quite honest, I didn’t spend too much time investigating the solutions above. I probably spent more time reading about them to write this blog post.&lt;/p&gt;

&lt;p&gt;I decided to go with Google Cloud because I’ve never used GCP before and wanted to give it a try. Additionally, the documentation for it seemed quite straightforward, as well as the support for Italian as language to transcribe from (the podcast is in Italian). I also had a few free credits available because I’ve never used GCP for personal use before.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up
&lt;/h2&gt;

&lt;p&gt;If you want to try transcribing your episodes too, follow this quick setup guide to get started.&lt;/p&gt;

&lt;p&gt;Head over to &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt; and set up an account. Make sure you create a project and enable the Speech-to-Text API. If you forget to do so &lt;code&gt;gcloud&lt;/code&gt; will be able to take care of that for you, later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2FScreenshot-2020-05-13-at-20.24.59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2FScreenshot-2020-05-13-at-20.24.59.png" alt="Google Cloud Speech-to-Text"&gt;&lt;/a&gt;Google Cloud Speech-to-Text&lt;/p&gt;

&lt;p&gt;The second thing I did was install &lt;code&gt;gcloud&lt;/code&gt;, the CLI Google Cloud provides for interacting with its APIs. Since I was only interested in testing the API, this tool seemed like the quickest way to get started.&lt;/p&gt;

&lt;p&gt;Additionally, there’s not much you can do from the Google Cloud Web Console if you want to deal with Speech-to-Text APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get your file ready for transcription
&lt;/h3&gt;

&lt;p&gt;The sampling rate of your audio file should be at least 16 kHz for better results. Additionally, GCP recommends a lossless codec. I only had an mp3 of my episode handy at the time, so I gave it a try anyway and it worked well enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make sure you know the sample rate of your file, though, because specifying a wrong one might lead to poor results.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can usually verify the sample rate by getting info on your file from your Mac’s Finder:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2FScreenshot-2020-05-13-at-20.03.37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2FScreenshot-2020-05-13-at-20.03.37.png" alt="File's context menu"&gt;&lt;/a&gt;Just click Get Info on your file’s context menu&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2FScreenshot-2020-05-13-at-19.55.04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2FScreenshot-2020-05-13-at-19.55.04.png" alt="File's Finder Info section with sample rate"&gt;&lt;/a&gt;There’s the sample rate&lt;/p&gt;

&lt;p&gt;You can read more about the recommended settings on the &lt;a href="https://cloud.google.com/speech-to-text/docs/best-practices" rel="noopener noreferrer"&gt;Best Practices&lt;/a&gt; section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Upload your episode to the bucket
&lt;/h3&gt;

&lt;p&gt;GCP needs your file to be available from a Storage Bucket so, go ahead and &lt;a href="https://cloud.google.com/storage/docs/creating-buckets" rel="noopener noreferrer"&gt;create one&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloud.google.com%2Fstorage%2Fimages%2Fcreate-bucket.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloud.google.com%2Fstorage%2Fimages%2Fcreate-bucket.png"&gt;&lt;/a&gt;Storage Bucket creation example&lt;/p&gt;

&lt;p&gt;You’ll be able to upload your episode from there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time to transcribe
&lt;/h2&gt;

&lt;p&gt;Once you have your episode file up there in the cloud, go back to the local machine terminal where you have configured the &lt;code&gt;gcloud&lt;/code&gt; tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2Fcarbon1-1024x175.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2Fcarbon1-1024x175.png"&gt;&lt;/a&gt;Gcloud used to trigger the speech-to-text transcription&lt;/p&gt;

&lt;p&gt;If your episode lasts longer than 60 seconds (😬) you’ll want to use &lt;code&gt;recognize-long-running&lt;/code&gt; and most likely specify &lt;code&gt;--async&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As I said before, make sure you specify the right &lt;code&gt;--sample-rate&lt;/code&gt;: in my case 44100. This will help GCP transcribe your file with better results.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;--async&lt;/code&gt; switch creates a long-running asynchronous operation. It took around 5 minutes for me to have the operation complete.&lt;/p&gt;
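&lt;p&gt;The invocation I used looks something like this (the bucket and file names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ gcloud ml speech recognize-long-running \
    'gs://your-bucket/episode-1.mp3' \
    --language-code=it-IT \
    --sample-rate=44100 \
    --async
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;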

&lt;p&gt;Oddly, I wasn’t able to find any reference to the asynchronous operation from my Google Cloud Console. So, if you want to be able to know what happened to your transcription job, make sure you take note of the operation identifier. You’ll need it to query the &lt;code&gt;speech operations&lt;/code&gt; API for information about your transcription job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2Fcarbon2-1024x268.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2Fcarbon2-1024x268.png"&gt;&lt;/a&gt;The speech operation metadata&lt;/p&gt;

&lt;h2&gt;
  
  
  The transcribed data
&lt;/h2&gt;

&lt;p&gt;Once your transcription operation is complete the &lt;code&gt;describe&lt;/code&gt; command will return the transcript excerpts, together with the confidence rate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2Fcarbon3-1024x641.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2020%2F05%2Fcarbon3-1024x641.png"&gt;&lt;/a&gt;The speech transcript excerpt&lt;/p&gt;

&lt;p&gt;I wasn’t particularly interested in the &lt;code&gt;confidence&lt;/code&gt; rate, I only wanted a big blob of text to be able to review and use for SEO purposes as well as to be able to include it with the episode. For this reason, &lt;strong&gt;&lt;em&gt;jq to the rescue!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I love &lt;code&gt;jq&lt;/code&gt;: you can achieve so much with it when it comes to manipulating JSON.&lt;/p&gt;

&lt;p&gt;In my case, I only wanted to concatenate all the &lt;code&gt;transcript&lt;/code&gt; fields and save them to a file. Here’s how I did it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ./bin/gcloud ml speech operations describe &amp;lt;your-transcription-operation-id&amp;gt; | jq '.response.results[].alternatives[].transcript' &amp;gt; my-transcript.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I thought of sharing the steps above because they’ve been useful to me in producing the transcripts. I think GCP Speech-to-Text works quite well with Italian but, of course, the transcript is not suitable to be used as it is, unless your accent is perfect. Mine wasn’t (😅).&lt;/p&gt;

&lt;p&gt;If you want to know more about my journey towards publishing my first podcast, &lt;a href="https://twitter.com/alediaferia" rel="noopener noreferrer"&gt;&lt;strong&gt;follow me on Twitter&lt;/strong&gt;&lt;/a&gt; where I’ll be sharing more about it.&lt;/p&gt;




&lt;p&gt;This post appeared first on &lt;a href="https://alediaferia.com" rel="noopener noreferrer"&gt;Alessandro Diaferia's blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>podcast</category>
      <category>productivity</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Is Why more important than How?</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Tue, 12 May 2020 11:38:34 +0000</pubDate>
      <link>https://dev.to/alediaferia/why-how-baj</link>
      <guid>https://dev.to/alediaferia/why-how-baj</guid>
      <description>&lt;p&gt;As developers, we're often eager to start coding the solution. The problem is, sometimes, we start with the solution in mind, but we're not clear as to what the problem is.&lt;/p&gt;

&lt;p&gt;I often see myself wondering what's the cleanest way of implementing what I have in mind: but I soon realize I'm focusing so much on the implementation details that I've almost forgotten &lt;strong&gt;why&lt;/strong&gt; it is that I want to implement a certain functionality.&lt;/p&gt;

&lt;p&gt;When I focus too much on the implementation, the purpose of the functionality starts to become less concrete and I almost forget what the value I wanted to deliver to my users was.&lt;/p&gt;

&lt;p&gt;I've seen this happen in many situations: user stories specified starting with the &lt;em&gt;How&lt;/em&gt; rather than the &lt;em&gt;Why&lt;/em&gt;. They, deliberately or inadvertently, mandate a specific implementation. In doing so, I think they dramatically reduce the chances of really delivering something valuable to the user.&lt;/p&gt;

&lt;p&gt;Try starting with the &lt;em&gt;Why&lt;/em&gt;, instead. Why do you want to build this feature? What is it that you want your users to be able to achieve? By doing this exercise I often find that the implementation that was originally suggested wasn't really the best way to help the user achieve the intended outcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What's your take on this &lt;em&gt;philosophical&lt;/em&gt; matter?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Also, if you want to hear more &lt;em&gt;philosophical&lt;/em&gt; questions from me, &lt;strong&gt;&lt;a href="https://twitter.com/alediaferia"&gt;follow me on Twitter&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>design</category>
      <category>discuss</category>
      <category>practices</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How I used Chisel to pull stats on Gitlab pipelines</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Sat, 09 May 2020 13:44:53 +0000</pubDate>
      <link>https://dev.to/alediaferia/how-i-used-chisel-to-pull-stats-on-gitlab-pipelines-2lph</link>
      <guid>https://dev.to/alediaferia/how-i-used-chisel-to-pull-stats-on-gitlab-pipelines-2lph</guid>
      <description>&lt;p&gt;I built &lt;a href="https://chisel.cloud"&gt;chisel.cloud&lt;/a&gt; in my spare time to automate something I did to derive insights about my Gitlab pipeline times.&lt;/p&gt;

&lt;p&gt;In this blog post I’m going to show you how I did it in the hope that it might be useful to you too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chisel.cloud"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s5cZT-Eh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/Screenshot-2020-05-04-at-15.24.59-1024x693.png" alt="chisel app main screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the picture above, Chisel is still pretty early stage. I decided to publish it anyway because I’m curious to know whether something like this can be useful to you too or not.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding deployment time
&lt;/h2&gt;

&lt;p&gt;The goal of this exercise was for me to better understand the deployment time (from build to being live in production) of my project and have a data-driven approach as to what to do next.&lt;/p&gt;

&lt;p&gt;Since the project in question uses Gitlab CI/CD, I thought of taking advantage of its API to pull down this kind of information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gitlab Pipelines API
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://docs.gitlab.com/ee/api/pipelines.html#list-project-pipelines"&gt;Gitlab pipelines API&lt;/a&gt; is pretty straightforward, but a few differences between the &lt;code&gt;/pipelines&lt;/code&gt; and the &lt;code&gt;/pipelines/:id&lt;/code&gt; APIs mean that you have to do a little composition work to pull down interesting data.&lt;/p&gt;

&lt;p&gt;Here’s how I did it.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Pull down your successful pipelines
&lt;/h3&gt;

&lt;p&gt;The first thing I did was fetch the successful pipelines for my project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9g_NPnJx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-1-1024x768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9g_NPnJx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-1-1024x768.png" alt="chisel app first screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, this API returns minimal information about each pipeline. What I needed to do next in order to understand pipeline times was to fetch further details for each pipeline.&lt;/p&gt;
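&lt;p&gt;Outside Chisel, the same two API calls can be sketched with &lt;code&gt;curl&lt;/code&gt; (token and IDs are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# list the successful pipelines for the project
$ curl --header "PRIVATE-TOKEN: &amp;lt;your-token&amp;gt;" \
    "https://gitlab.com/api/v4/projects/&amp;lt;project-id&amp;gt;/pipelines?status=success"
# fetch the full details of a single pipeline
$ curl --header "PRIVATE-TOKEN: &amp;lt;your-token&amp;gt;" \
    "https://gitlab.com/api/v4/projects/&amp;lt;project-id&amp;gt;/pipelines/&amp;lt;pipeline-id&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;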

&lt;h4&gt;
  
  
  Chisel – Transform
&lt;/h4&gt;

&lt;p&gt;Chisel provides a handy transformation tool that uses &lt;a href="https://jmespath.org/"&gt;JMESPath&lt;/a&gt; to help you manipulate the JSON returned by the API you are working with. I used it to extract the pipeline IDs from the returned response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RNGY7aAY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-2-1024x768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RNGY7aAY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-2-1024x768.png" alt="chisel app second screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Chisel shows you a live preview of your transformation. Something as simple as &lt;code&gt;[*].id&lt;/code&gt; is enough for now. The result is an array of pipeline IDs.&lt;/p&gt;
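&lt;p&gt;If you want to experiment with the same expressions outside Chisel, the &lt;a href="https://www.npmjs.com/package/jmespath"&gt;jmespath&lt;/a&gt; npm package implements the same query language. A minimal sketch (the sample data is made up, not a real API response):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const jmespath = require('jmespath');

// stand-in for the /pipelines response
const pipelines = [
  { id: 101, status: 'success' },
  { id: 102, status: 'success' }
];

console.log(jmespath.search(pipelines, '[*].id')); // [ 101, 102 ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;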

&lt;p&gt;Right after obtaining all the IDs I need, I can apply another transformation to turn those IDs into pipeline objects with all the relevant information I need for my stats.&lt;/p&gt;

&lt;p&gt;Chisel has another transformation type, called &lt;strong&gt;Fetch&lt;/strong&gt;, that helps you transform the selected values into the result of something fetched from a URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ce_D6FQD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-3-1024x768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ce_D6FQD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-3-1024x768.png" alt="chisel app third screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In particular, you can use the &lt;code&gt;${1}&lt;/code&gt; placeholder to pass in the &lt;em&gt;mapped&lt;/em&gt; value. In my case, each ID is being mapped to the &lt;code&gt;/pipelines/${1}&lt;/code&gt; API.&lt;/p&gt;

&lt;p&gt;The result is pretty straightforward.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eIUg_JvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-4-1024x768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eIUg_JvG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-4-1024x768.png" alt="chisel app fourth screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Filter out what you don’t need
&lt;/h3&gt;

&lt;p&gt;As you can see, some of the returned pipelines have a &lt;code&gt;before_sha&lt;/code&gt; of value &lt;code&gt;0000000000000000000000000000000000000000&lt;/code&gt;. Those are pipelines triggered outside of merges into &lt;code&gt;master&lt;/code&gt;, so I’m not interested in them.&lt;/p&gt;

&lt;p&gt;Filtering those out is as simple as &lt;code&gt;[?before_sha != '0000000000000000000000000000000000000000']&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cbSVgw4m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-5-1024x768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cbSVgw4m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-5-1024x768.png" alt="chisel app fifth screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  The transformation history
&lt;/h4&gt;

&lt;p&gt;As you can see, on the right of the screen there’s a little widget that shows you the transformations you have applied. You can use it to go back and forth in the transformation history and rollback/reapply the modifications to your data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t2IYm_j_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/transform-history.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t2IYm_j_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/transform-history.png" alt="chisel app transformation history"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The last transformation
&lt;/h3&gt;

&lt;p&gt;The last transformation I need turns my output into a set of records, so I can start pulling out useful information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lmyQyfHD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-6-1024x768.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lmyQyfHD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/chisel-6-1024x768.png" alt="chisel app sixth screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m selecting only a few fields and turning the result into an array of arrays. This is the right format for exporting as a CSV.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ofoHUMPD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/csv-export.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ofoHUMPD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/csv-export.png" alt="chiel app csv download screen"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Sheets
&lt;/h2&gt;

&lt;p&gt;Finally, I can upload my CSV export to &lt;a href="https://sheets.new"&gt;Google Sheets&lt;/a&gt; and plot the information I need.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EUchn0Nd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/gsheets-grab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EUchn0Nd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://alediaferia.com/wp-content/uploads/2020/05/gsheets-grab.png" alt="google sheets import"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Chisel&lt;/em&gt; is still at its earliest stage of development and is pretty much tailored to my specific use case, but if you think this tool could be useful to you too, please head to the &lt;a href="https://github.com/chiselcloud/chisel"&gt;GitHub&lt;/a&gt; repo and suggest the improvements you’d like to see.&lt;/p&gt;

&lt;p&gt;If you liked this post and want to know more about Chisel, &lt;strong&gt;&lt;a href="https://twitter.com/alediaferia"&gt;follow me on Twitter&lt;/a&gt;&lt;/strong&gt;!&lt;/p&gt;




&lt;p&gt;Featured image by &lt;a href="https://unsplash.com/@drscythe?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Dominik Scythe&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/chisel?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://alediaferia.com/2020/05/08/how-used-chisel-pull-gitlab-pipelines-stats/"&gt;How I used Chisel to pull Gitlab pipelines stats&lt;/a&gt; appeared first on &lt;a href="https://alediaferia.com"&gt;Alessandro Diaferia&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vWogaON8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/github-logo-28d89282e0daa1e2496205e2f218a44c755b0dd6536bbadf5ed5a44a7ca54716.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/chiselcloud"&gt;
        chiselcloud
      &lt;/a&gt; / &lt;a href="https://github.com/chiselcloud/chisel"&gt;
        chisel
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      The Chisel online app
    &lt;/h3&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>showdev</category>
      <category>javascript</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How do people package simple backend-less utility apps made in JavaScript these days? </title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Tue, 21 Apr 2020 13:33:04 +0000</pubDate>
      <link>https://dev.to/alediaferia/how-do-people-package-simple-backend-less-utility-apps-made-in-javascript-these-days-40el</link>
      <guid>https://dev.to/alediaferia/how-do-people-package-simple-backend-less-utility-apps-made-in-javascript-these-days-40el</guid>
      <description>&lt;p&gt;I'm considering releasing a small utility application that helps me pull and compose data from HTTP APIs. What's the best way these days to package such an app that doesn't require any backend but needs to perform network requests?&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>react</category>
      <category>discuss</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Any good resources for distributed transactions?</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Tue, 07 Jan 2020 10:55:41 +0000</pubDate>
      <link>https://dev.to/alediaferia/any-good-resources-for-distributed-transactions-4839</link>
      <guid>https://dev.to/alediaferia/any-good-resources-for-distributed-transactions-4839</guid>
      <description>&lt;p&gt;I'm looking to read on distributed transaction best practices and tools. Any good resources out there?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>database</category>
    </item>
    <item>
      <title>Continuous deployment with GitLab, Docker and Heroku</title>
      <dc:creator>Alessandro Diaferia</dc:creator>
      <pubDate>Sun, 29 Dec 2019 18:07:24 +0000</pubDate>
      <link>https://dev.to/alediaferia/continuous-deployment-with-gitlab-docker-and-heroku-324j</link>
      <guid>https://dev.to/alediaferia/continuous-deployment-with-gitlab-docker-and-heroku-324j</guid>
      <description>&lt;p&gt;&lt;em&gt;Continuous Deployment&lt;/em&gt; refers to the capability of your organisation to produce and release software changes in short and frequent cycles.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.martinfowler.com/bliki/FrequencyReducesDifficulty.html" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2Fcd-graph.png"&gt;&lt;/a&gt;Pain vs Frequency relationship – &lt;a href="https://www.martinfowler.com/bliki/FrequencyReducesDifficulty.html" rel="noopener noreferrer"&gt;https://www.martinfowler.com/bliki/FrequencyReducesDifficulty.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the ideas behind &lt;em&gt;Continuous Deployment&lt;/em&gt; is that increasing the frequency of deployment of your changes to production will reduce the friction associated with it. Too often, though, &lt;em&gt;deployment&lt;/em&gt; is an activity that gets neglected until the last minute: it is perceived as a necessary evil rather than an inherent part of a software engineer’s job. However, shifting deployment left, as early as possible in the development life cycle, will help surface issues, dependencies and unexpected constraints sooner rather than later.&lt;/p&gt;

&lt;p&gt;For instance, continuously deploying will make it easier to understand which change caused issues, if any, as well as making it easier to recover. Imagine having to scan through hundreds of commit messages in your version control system history to find the change that introduced the issue…&lt;/p&gt;

&lt;p&gt;Automation is key to achieving continuous deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The project
&lt;/h2&gt;

&lt;p&gt;In this article we’re gonna explore how to leverage tools like &lt;a href="https://docs.gitlab.com/ee/ci/pipelines.html" rel="noopener noreferrer"&gt;GitLab Pipeline&lt;/a&gt;, &lt;a href="https://heroku.com" rel="noopener noreferrer"&gt;Heroku&lt;/a&gt; and &lt;a href="https://docker.com" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; to achieve a simple continuous deployment pipeline.&lt;/p&gt;

&lt;p&gt;Let’s start by creating a simple &lt;em&gt;Hello World&lt;/em&gt; application. For the purpose of this article I’m gonna use &lt;a href="https://create-react-app.dev/docs/getting-started/" rel="noopener noreferrer"&gt;Create React App&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npx create-react-app continuous-deployment
$ cd continuous-deployment
$ npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now that we have a running application, let’s build a Docker image to be able to deploy it to Heroku.&lt;/p&gt;
&lt;h2&gt;
  
  
  The container image
&lt;/h2&gt;

&lt;p&gt;We’re going to write a simple &lt;a href="https://docs.docker.com/engine/reference/builder/" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt; to build our app:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:10.17-alpine
COPY . .
RUN sh -c 'yarn global add serve &amp;amp;&amp;amp; yarn &amp;amp;&amp;amp; yarn build'
CMD serve -l $PORT -s build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;First of all, &lt;a href="https://devcenter.heroku.com/articles/container-registry-and-runtime#dockerfile-commands-and-runtime" rel="noopener noreferrer"&gt;two things to keep in mind when building images for Heroku&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containers are &lt;strong&gt;&lt;em&gt;not&lt;/em&gt;&lt;/strong&gt; run with root privileges&lt;/li&gt;
&lt;li&gt;The port to listen on is fed by Heroku into the container and needs to be consumed from an environment variable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can see from the &lt;code&gt;Dockerfile&lt;/code&gt; definition, we are starting the app by passing the &lt;code&gt;PORT&lt;/code&gt; environment variable. We can now test the image locally.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;docker build &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; continuous-deployment:latest
&lt;span class="nv"&gt;$ &lt;/span&gt;docker run &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4444 &lt;span class="nt"&gt;-p4444&lt;/span&gt;:4444
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;code&gt;-e PORT=4444&lt;/code&gt; specifies which port the app will listen on. You can now try your application at &lt;a href="http://localhost:4444" rel="noopener noreferrer"&gt;http://localhost:4444&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Additionally, I’ve added a &lt;code&gt;myuser&lt;/code&gt; user at the end of the Dockerfile, just to make sure everything still works with a non-root user.&lt;/p&gt;
&lt;h2&gt;
  
  
  Deploy to Heroku
&lt;/h2&gt;

&lt;p&gt;Before building our continuous deployment pipeline, let’s deploy manually to make sure our image is good. Create a new application on Heroku and give it a name. In my case it’s gonna be &lt;a href="https://cd-alediaferia.herokuapp.com" rel="noopener noreferrer"&gt;cd-alediaferia&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-05-at-15.04.37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-05-at-15.04.37.png" alt="Screenshot 2019-12-05 at 15.04.37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s tag and push our image to the Heroku Registry after logging in.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;heroku container:login
&lt;span class="nv"&gt;$ &lt;/span&gt;docker tag &amp;lt;image&amp;gt; registry.heroku.com/&amp;lt;app-name&amp;gt;/web
&lt;span class="nv"&gt;$ &lt;/span&gt;docker push registry.heroku.com/&amp;lt;app-name&amp;gt;/web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And release it straight to Heroku:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;heroku container:release &lt;span class="nt"&gt;-a&lt;/span&gt; web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Your app should now be successfully up and running on Heroku.&lt;/p&gt;
&lt;h2&gt;
  
  
  The GitLab Pipeline
&lt;/h2&gt;

&lt;p&gt;In this section, we’re going to configure the pipeline piece on GitLab so that we can continuously deploy our app. Here follows the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file that I have configured for my &lt;a href="https://gitlab.com/alediaferia/continuous-deployment" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
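&lt;p&gt;In case the gist embed doesn’t render above, here’s a sketch of what that &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; looks like. The job names and commands match what’s described below; the runner images and file paths are assumptions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
  - build
  - release

build_image:
  stage: build
  image: quay.io/buildah/stable  # assumption: any image shipping buildah and the heroku CLI
  script:
    - buildah bud --iidfile /tmp/image-id .
    - buildah push --creds=_:$(heroku auth:token) $(cat /tmp/image-id) docker://registry.heroku.com/cd-alediaferia/web
  only:
    - master

release:
  stage: release
  image: node:10-alpine  # assumption: any image with the heroku CLI available
  script:
    - heroku container:release -a cd-alediaferia web
  only:
    - master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;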



&lt;p&gt;In the above snippet we have defined two jobs: &lt;code&gt;build_image&lt;/code&gt; and &lt;code&gt;release&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;build_image&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This job specifies how to build our Docker image. If you look closely, you’ll notice that I’m not using Docker itself but &lt;a href="https://github.com/containers/buildah" rel="noopener noreferrer"&gt;Buildah&lt;/a&gt;. &lt;code&gt;Buildah&lt;/code&gt; is an OCI-compliant container building tool that is capable of producing Docker images with some &lt;a href="https://major.io/2019/08/13/buildah-error-vfs-driver-does-not-support-overlay-mountopt-options/" rel="noopener noreferrer"&gt;minor configuration&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;release&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This job performs the actual release by pushing to your Heroku app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional configuration
&lt;/h2&gt;

&lt;p&gt;Before trying our pipeline out, let’s configure the &lt;code&gt;HEROKU_API_KEY&lt;/code&gt; variable so that it can get picked up by the &lt;code&gt;heroku&lt;/code&gt; CLI that we’re going to use in the pipeline definition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-05-at-22.52.20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-05-at-22.52.20.png" alt="Pipeline Variable Setting"&gt;&lt;/a&gt;GitLab pipeline variable setting&lt;/p&gt;

&lt;h2&gt;
  
  
  Pushing to GitLab
&lt;/h2&gt;

&lt;p&gt;Now that we have set everything up we are ready to push our code to the deployment pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-05-at-23.15.34-1024x482.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-05-at-23.15.34-1024x482.png"&gt;&lt;/a&gt;GitLab pipeline in action&lt;/p&gt;

&lt;p&gt;Let’s have a look at the build step that GitLab successfully executed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FGitlab-pipeline-1024x451.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FGitlab-pipeline-1024x451.jpg"&gt;&lt;/a&gt;GitLab pushing to the Heroku Registry&lt;/p&gt;

&lt;p&gt;The first line uses &lt;code&gt;buildah&lt;/code&gt; to build the image. It works pretty much like &lt;code&gt;docker&lt;/code&gt; and I’ve used &lt;code&gt;--iidfile&lt;/code&gt; to export the Image ID to a file that I then read from the command-line in the subsequent invocation.&lt;/p&gt;

&lt;p&gt;The second line simply pushes to the Heroku Registry. Notice how easily I can log in by doing &lt;code&gt;--creds=_:$(heroku auth:token)&lt;/code&gt;: this tells &lt;code&gt;buildah&lt;/code&gt; to use the token provided by Heroku to log into the registry.&lt;/p&gt;

&lt;p&gt;The deployment job, finally, is as easy as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ heroku container:release -a cd-alediaferia web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;My app is finally deployed, and everything happened automatically after my push to &lt;code&gt;master&lt;/code&gt;. This is awesome because I can now continuously deliver my changes to production in a pain-free fashion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cd-alediaferia.herokuapp.com/" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Falediaferia.com%2Fwp-content%2Fuploads%2F2019%2F12%2FScreenshot-2019-12-07-at-12.20.49-1024x748.png"&gt;&lt;/a&gt;My successfully deployed app&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this post. Let me know in the comments and follow me on &lt;a href="https://twitter.com/alediaferia" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; if you want to stay up-to-date about DevOps and Software Engineering practices.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://alediaferia.com/2019/12/07/continuous-deployment-gitlab-docker-heroku/" rel="noopener noreferrer"&gt;This post&lt;/a&gt; appeared first on &lt;a href="https://alediaferia.com" rel="noopener noreferrer"&gt;Ale's main thread&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>docker</category>
      <category>react</category>
    </item>
  </channel>
</rss>
