<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Argonaut</title>
    <description>The latest articles on DEV Community by Argonaut (@argonaut).</description>
    <link>https://dev.to/argonaut</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F849898%2F1546e396-0795-458e-98f4-7c5ac10427f9.jpeg</url>
      <title>DEV Community: Argonaut</title>
      <link>https://dev.to/argonaut</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/argonaut"/>
    <language>en</language>
    <item>
      <title>Cloud Credits: A Guide for Startups To Maximize Benefits and Avoid Pitfalls</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Tue, 25 Jul 2023 09:38:54 +0000</pubDate>
      <link>https://dev.to/argonaut/cloud-credits-a-guide-for-startups-to-maximize-benefits-and-avoid-pitfalls-9k1</link>
      <guid>https://dev.to/argonaut/cloud-credits-a-guide-for-startups-to-maximize-benefits-and-avoid-pitfalls-9k1</guid>
      <description>&lt;p&gt;Scaling your startup using the cloud is easier than ever before. With the availability of cloud credits, startup founders can cover their cloud operational costs for up to two years, choosing their preferred provider. The startup programs offered by the leading cloud providers have played a crucial role in helping numerous startups experience exponential growth and deliver modern cloud-based solutions to customers worldwide.&lt;/p&gt;

&lt;p&gt;Both AWS and GCP offer cloud credits to startups ranging from bootstrapped ventures to those at the Series A stage. If you haven't acquired credits yet, we recommend checking out our article on how to obtain free credits &lt;a href="https://www.argonaut.dev/blog/aws-free-credits"&gt;for AWS&lt;/a&gt; and &lt;a href="https://www.argonaut.dev/blog/gcp-free-credits"&gt;for GCP&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the ways in which these credits can benefit startups. We will then delve into a detailed understanding of the programs and provide insights on how to maximize their potential. Lastly, we will highlight common pitfalls to avoid when utilizing cloud credits.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do cloud credits help startups?
&lt;/h2&gt;

&lt;p&gt;The importance of startup cloud credits for early-stage businesses lies in the following aspects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Offsets Initial Infrastructure Costs:&lt;/strong&gt; Startups can optimize their budget by utilizing startup credits provided by cloud service providers, which help offset initial infrastructure costs such as computing resources, storage, network infrastructure, and essential services. This strategy enables startups to reduce upfront expenses and allocate their limited financial resources toward crucial areas such as product development, marketing, and talent acquisition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experimentation with Advanced Cloud Services:&lt;/strong&gt; Startup credits provide opportunities to explore and experiment with the diverse range of services and tools offered by cloud platforms, including the latest in AI, ML, big data, and serverless. Access to such cutting-edge tools and technologies can give startups a competitive edge, help them innovate, and enhance their overall capabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability and Flexibility:&lt;/strong&gt; Cloud computing provides startups with the ability to scale their operations quickly and efficiently. Startup credits allow early-stage businesses to access scalable resources and infrastructure, ensuring they have the capacity to accommodate growth as their user base and demand increase. The flexibility provided by cloud services helps startups avoid upfront investments in fixed infrastructure and allows them to adjust their resources based on fluctuating requirements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Reliability and Security:&lt;/strong&gt; Startups, especially those in their early stages, may lack the necessary expertise or resources to implement robust security measures on their own. By utilizing startup credits, they can benefit from the established security practices and robust infrastructure provided by cloud providers.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, startup credits provide a valuable opportunity for early-stage businesses to minimize infrastructure costs, experiment with cloud services, and scale their operations efficiently. By leveraging these credits, startups can accelerate their growth, focus on core business activities, and leverage the capabilities and expertise of established cloud service providers.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do these programs work?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Application review timeline
&lt;/h3&gt;

&lt;p&gt;AWS reviews applications within 7-10 business days, Microsoft within 5-7 business days, and GCP within 3-5 business days. You will receive an email with further details on redeeming the credits for your respective cloud. If you are rejected, Microsoft allows you to reapply after 14 days; Google and AWS don’t openly specify terms for reapplication, so you can reach out to their support teams to take another shot at the credits. The rejection email usually also includes a reason for the rejection.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tiered credits
&lt;/h3&gt;

&lt;p&gt;Most of the startup credit programs have a tiered credit offering. You start by creating an account with the cloud provider of your choice, then enroll in the credits program, usually by filling out a form, after which your application is reviewed. Usually, a basic amount, say $5,000 worth of credits, is given to the startup for their first few months of usage. However, if you have more demanding usage, you can request higher credit, which will then be evaluated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requesting more credits
&lt;/h3&gt;

&lt;p&gt;In the case of tiered credits, you are allowed to request credits multiple times until you reach the maximum amount ($100,000 for AWS). If you had already been offered $5,000 and subsequently applied for $25,000, you will receive the difference between the two credit awards, i.e., $20,000. Each request has to be for an amount higher than the previous one. Requests can be made through the same form as your initial application. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 See &lt;a href="https://foundershubsupportcenter.powerappsportals.com/article/KA-01143"&gt;the requirements&lt;/a&gt; for the various levels of Microsoft for Startups Founders Hub.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Credits expiry
&lt;/h3&gt;

&lt;p&gt;All cloud credits come with an expiry date. If you received credit codes from a developer community program, they would have to be redeemed within 60 days from when you received them. The expiry date of the credits is usually one year from the moment you apply them to your account. You can also check the credit utilization and expiry date on the billing console. Users will also receive emails when they are about to run out of credits or get closer to the expiry date. &lt;/p&gt;

&lt;p&gt;If you have multiple credits expiring at different times, the one expiring sooner will be applied first to your upcoming bills. More on that &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/useconsolidatedbilling-credits.html"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pitfalls to avoid
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Turning a blind eye to cloud costs
&lt;/h3&gt;

&lt;p&gt;With thousands in cloud credits, you may start adding more services from the provider. You may also end up over-provisioning (choosing bigger instance types than you actually need). These practices don’t sting until the day your credits expire. Some teams simply continue paying for these services, which quickly becomes wasteful spending. Evaluating and optimizing these resources later in your product’s lifecycle is also a bigger, more time-consuming effort.&lt;/p&gt;

&lt;p&gt;Our recommendation is to start off small: provision only the resources and use only the services that are essential for your operations. Try to stay within the free tier of services whenever possible. Be in control of who has access to provision services and for what purposes. Operate as if you were actually paying for the cloud services so that the bill doesn’t come as a shock once your credits run out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the latest versions
&lt;/h3&gt;

&lt;p&gt;Though most cloud services nowadays launch with sufficient documentation, it is advisable to wait six months to a year before trying out a new service in a production setting. With newer services, setup tends to be harder and public troubleshooting guides are scarcer.&lt;/p&gt;

&lt;p&gt;Our recommendation is to stick to services that you and your team are familiar with or can easily adopt. Choose solutions that have been GA for at least a few months, and have sufficient community support.&lt;/p&gt;

&lt;h3&gt;
  
  
  Poor product quality
&lt;/h3&gt;

&lt;p&gt;With free credits for years and investor money in hand, it can feel like anything is possible. You deploy your app frequently and add 20 new features to the roadmap each quarter. This can also lead to hiring more engineers and creating a false sense of growth. The day you run out of investor money and cloud credits, your bills skyrocket, and you are left laying people off with half-finished features everywhere.&lt;/p&gt;

&lt;p&gt;Our recommendation is to have a lean product management approach and focus on building features that work well for your customer today. Put in equal efforts in marketing and sales to achieve a product market fit before taking on more ambitious feature additions to your product.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overcomplicating your solution
&lt;/h3&gt;

&lt;p&gt;When utilizing cloud credits for your startup, it's crucial to be cautious about overcomplicating the solution. This can happen if there aren’t many experienced engineers/architects on the team. While the cloud offers scalability, flexibility, and cost savings, it's important not to take these advantages for granted. Poorly managing your cloud setup can result in various problems, such as wasted resources, security vulnerabilities, and unnecessary complexity. Moreover, there is a possibility that your credits may run out, leaving you with unexpected expenses or even service outages.&lt;/p&gt;

&lt;p&gt;Our recommendation is to have proper oversight and management of your cloud infrastructure from day one. This entails establishing robust monitoring and alerting systems to effectively track resource usage, performance metrics, and potential security threats. Additionally, if you have limited familiarity with cloud technology, opt for simpler solutions like DigitalOcean.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ignoring best practices
&lt;/h3&gt;

&lt;p&gt;If you believe that, because you are a startup, you have to do everything differently, think again. Ignoring best practices in areas such as GitOps, testing, CI pipelines, security, observability, and Infrastructure as Code will only set you back in the long run.&lt;/p&gt;

&lt;p&gt;Our recommendation is that you try to incorporate best practices for each of these from day one and put in sufficient research before getting started with running your apps on the cloud. Many of these services and their providers also have additional costs associated with them; be mindful of that and include it in your cloud budgeting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Not preparing for the first bill
&lt;/h3&gt;

&lt;p&gt;Your cloud credits will inevitably come to an end, and you may not be ready to digest the huge amount that shows up on your first cloud bill. If you have had a product for several months now or even years, it is likely that you already have a way of funding your operations. &lt;/p&gt;

&lt;p&gt;Our recommendation is to be prepared for the bill by keeping your team informed about the usage and costs incurred. Export the cost data from the start and constantly monitor how it changes as you scale. Explore if there are other cheaper alternatives and experiment with them before going into your first bill. &lt;/p&gt;
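&lt;p&gt;For example, on AWS you can export cost data from the CLI with Cost Explorer. This is only a sketch; adjust the date range to your own billing period:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Month-to-date unblended cost, grouped by service
aws ce get-cost-and-usage \
  --time-period Start=2023-07-01,End=2023-07-25 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;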

&lt;p&gt;Use open-source solutions and third-party data stores where possible, so you can step away from your current cloud provider and leverage another year or two of free credits from a different one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, leveraging cloud credits has become a game-changer for startups looking to scale their operations. By utilizing these credits, startups can offset initial infrastructure expenses, experiment with advanced cloud services, and benefit from scalability and flexibility. Moreover, cloud credits provide enhanced reliability, security, and access to cutting-edge technologies. &lt;/p&gt;

&lt;p&gt;However, it's important to be mindful of potential pitfalls such as overlooking cloud costs, overcomplicating solutions, and ignoring best practices. By being proactive, prepared, and strategic in utilizing cloud credits, startups can accelerate their growth, focus on core business activities, and maximize their potential for success in the cloud computing realm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.argonaut.dev/"&gt;Argonaut&lt;/a&gt; platform enables you to build your apps on AWS and GCP, while acting as a central place for managing infra resources, deployment pipelines, app configurations, and more!&lt;/p&gt;

</description>
      <category>startup</category>
      <category>credits</category>
      <category>productivity</category>
      <category>aws</category>
    </item>
    <item>
      <title>Helm Guide: An Introduction to the Kubernetes Package Manager</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Fri, 16 Jun 2023 04:51:33 +0000</pubDate>
      <link>https://dev.to/argonaut/helm-guide-an-introduction-to-the-kubernetes-package-manager-4jbm</link>
      <guid>https://dev.to/argonaut/helm-guide-an-introduction-to-the-kubernetes-package-manager-4jbm</guid>
      <description>&lt;p&gt;Helm has become an essential part of the Kubernetes ecosystem. By using Helm, one can simplify the process of creating and deploying Kubernetes resources. In this article, we walk through the basic components of Helm, its architecture, and the benefits of using Helm. Then we have a tutorial on deploying Helm charts using Argonaut.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Helm?
&lt;/h2&gt;

&lt;p&gt;Helm is a package manager for Kubernetes that simplifies application deployment and management. It enables users to define, install, and upgrade complex applications with a single command. Helm offers a user-friendly design suitable for beginners and experts and a vast library of ready-to-use charts for effortless installation and management of diverse applications.&lt;/p&gt;
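&lt;p&gt;To make the “single command” claim concrete, here is what installing, upgrading, and removing a community chart typically looks like (the bitnami repository and nginx chart are only examples):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Add a chart repository, then install a chart from it as a named release
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx

# Upgrade and uninstall follow the same one-command pattern
helm upgrade my-nginx bitnami/nginx
helm uninstall my-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;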

&lt;h2&gt;
  
  
  Key concepts in Helm
&lt;/h2&gt;

&lt;p&gt;Helm manages the deployment lifecycle of applications using Helm Charts, which ensures consistency across different environments and users. Users can &lt;a href="https://helm.sh/docs/helm/helm_create/"&gt;create their own&lt;/a&gt; Helm charts for deployment or utilize charts for third-party and open-source tools from public repositories, such as &lt;a href="https://artifacthub.io/"&gt;artifacthub&lt;/a&gt;, &lt;a href="https://bitnami.com/stacks/helm"&gt;bitnami charts&lt;/a&gt;, &lt;a href="https://github.com/goharbor/harbor"&gt;harbor&lt;/a&gt;, and &lt;a href="https://chartmuseum.com/"&gt;chart museum&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Helm chart
&lt;/h3&gt;

&lt;p&gt;A Helm chart is a collection of files describing the resources and dependencies needed to deploy an application on Kubernetes. It allows for modularization and versioning, making application distribution, sharing, and management more accessible across various clusters and users.&lt;/p&gt;

&lt;p&gt;The package consists of multiple files and directories, each with a specific function. Helm reads the chart and generates the necessary Kubernetes manifests based on the provided configurations (values.yaml file).&lt;/p&gt;
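&lt;p&gt;As a minimal, hypothetical illustration of that merge, a template fragment references keys from values.yaml, and Helm substitutes them at render time:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# templates/deployment.yaml (fragment)
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

# values.yaml
image:
  repository: nginx
  tag: "1.25"

# Rendered output (e.g., via `helm template`):
# image: "nginx:1.25"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;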

&lt;p&gt;Helm charts can have dependencies, called &lt;a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/"&gt;subcharts&lt;/a&gt; which are stored in the &lt;code&gt;charts/&lt;/code&gt; directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure of a Helm chart
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;charts/&lt;/td&gt;
&lt;td&gt;Directory&lt;/td&gt;
&lt;td&gt;Location for chart dependencies managed manually.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;templates/&lt;/td&gt;
&lt;td&gt;Directory&lt;/td&gt;
&lt;td&gt;Template files that use Go templating; Helm merges them with the values.yaml configuration data to produce Kubernetes manifests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;templates/NOTES.txt (optional)&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;A plain text file containing short usage notes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;crds/&lt;/td&gt;
&lt;td&gt;Directory&lt;/td&gt;
&lt;td&gt;Store CRDs that will be installed during a helm install&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Chart.yaml&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;Metadata about the chart, such as the version, name, search keywords, etc.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LICENSE (optional)&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;Chart's license in plain text format.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;README.md (optional)&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;Important information for using the chart in a human-readable format.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;requirements.yaml (optional)&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;A list of chart’s dependencies&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;values.yaml&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;The default configuration values&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;values.schema.json (optional)&lt;/td&gt;
&lt;td&gt;File&lt;/td&gt;
&lt;td&gt;A JSON Schema for imposing a structure on the values.yaml file&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
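&lt;p&gt;You don’t have to assemble this layout by hand; the &lt;code&gt;helm create&lt;/code&gt; command scaffolds a chart with the standard files, which you can then trim down:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create mychart
# Produces (abridged):
# mychart/
#   Chart.yaml
#   values.yaml
#   charts/
#   templates/
#     NOTES.txt
#     ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;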

&lt;h3&gt;
  
  
  Helm releases
&lt;/h3&gt;

&lt;p&gt;The next important component in the Helm architecture is the &lt;code&gt;release&lt;/code&gt;. Releases in Helm represent instances of a deployed chart within a Kubernetes cluster. A release consists of all the Kubernetes objects and resources, such as deployments, services, and ingress rules, which are created as part of the configuration specified in the chart.&lt;/p&gt;

&lt;h3&gt;
  
  
  Helm chart repository
&lt;/h3&gt;

&lt;p&gt;Helm chart repositories, or repos, are dedicated HTTP servers that host and serve charts alongside an index.yaml file, which provides information about a collection of charts and their download locations. &lt;br&gt;
A Helm client can connect to multiple chart repositories but is configured with none by default. Using the &lt;code&gt;helm repo add&lt;/code&gt; command, users can add new chart repositories, enabling seamless access to and management of various charts for their Kubernetes deployments.&lt;/p&gt;

&lt;p&gt;Popular chart repositories are &lt;a href="https://artifacthub.io/"&gt;artifacthub&lt;/a&gt;, &lt;a href="https://bitnami.com/stacks/helm"&gt;bitnami charts&lt;/a&gt;, &lt;a href="https://github.com/goharbor/harbor"&gt;harbor&lt;/a&gt;, and &lt;a href="https://chartmuseum.com/"&gt;chart museum&lt;/a&gt;.&lt;/p&gt;
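&lt;p&gt;A typical workflow for working with a repository from the CLI, using bitnami as an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update                   # refresh the locally cached index.yaml files
helm search repo bitnami/rabbitmq  # search charts in the repos you have added
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;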
&lt;h3&gt;
  
  
  Chart version
&lt;/h3&gt;

&lt;p&gt;Every chart must have a version number. Packages in repositories are identified by name plus version. Helm charts are versioned according to the &lt;a href="https://semver.org/"&gt;SemVer 2 spec&lt;/a&gt;. For example, an nginx chart whose version field is set to &lt;code&gt;version: 1.2.3&lt;/code&gt; will be named &lt;code&gt;nginx-1.2.3.tgz&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The version number is found in the Chart.yaml file and is used by various Helm tools, including the CLI. When creating a package, the &lt;code&gt;helm package&lt;/code&gt; command uses the version number from the &lt;code&gt;Chart.yaml&lt;/code&gt; in the package name. The system expects the version number in the chart package name to match the one in the &lt;code&gt;Chart.yaml&lt;/code&gt;, and any discrepancy will cause an error.&lt;/p&gt;
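&lt;p&gt;A quick sketch of that behavior, assuming a chart directory whose &lt;code&gt;Chart.yaml&lt;/code&gt; sets &lt;code&gt;version: 1.2.3&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm package ./nginx
# e.g. Successfully packaged chart and saved it to: ./nginx-1.2.3.tgz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;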
&lt;h3&gt;
  
  
  Chart dependency
&lt;/h3&gt;

&lt;p&gt;In Helm, one chart may depend on any number of other charts. These dependencies can be added in two ways: by dynamically linking them via the &lt;code&gt;dependencies&lt;/code&gt; field in &lt;code&gt;Chart.yaml&lt;/code&gt;, or by bringing them into the &lt;code&gt;charts/&lt;/code&gt; directory and managing them manually.&lt;/p&gt;

&lt;p&gt;Example using dependencies field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apache&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.2.3&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://example.com/charts&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3.2.1&lt;/span&gt;
    &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://another.example.com/charts&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example using charts/ :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;wordpress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;Chart.yaml&lt;/span&gt;
  &lt;span class="s"&gt;# ...&lt;/span&gt;
  &lt;span class="s"&gt;charts/&lt;/span&gt;
    &lt;span class="s"&gt;apache/&lt;/span&gt;
      &lt;span class="s"&gt;Chart.yaml&lt;/span&gt;
      &lt;span class="s"&gt;# ...&lt;/span&gt;
    &lt;span class="s"&gt;mysql/&lt;/span&gt;
      &lt;span class="s"&gt;Chart.yaml&lt;/span&gt;
      &lt;span class="s"&gt;# ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
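&lt;p&gt;When dependencies are declared in &lt;code&gt;Chart.yaml&lt;/code&gt; rather than vendored manually, the &lt;code&gt;helm dependency&lt;/code&gt; subcommands fetch and inspect them for you:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm dependency update ./wordpress  # download declared dependencies into charts/
helm dependency list ./wordpress    # show each dependency and its status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;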



&lt;h3&gt;
  
  
  Release number (release version)
&lt;/h3&gt;

&lt;p&gt;A release can be modified several times. To keep track of these changes, a continuous counter is used. Upon the initial &lt;code&gt;helm install&lt;/code&gt;, the release number is set to 1. With each subsequent upgrade or rollback, the release number increases by 1. This history is useful if one needs to roll back to a previous release number.&lt;/p&gt;
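&lt;p&gt;You can inspect this counter at any time with &lt;code&gt;helm history&lt;/code&gt;; the release name below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm history my-release
# Lists each revision with its number, timestamp, status, chart, and description
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;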

&lt;h3&gt;
  
  
  Helm Rollbacks
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;helm rollback &amp;lt;RELEASE&amp;gt; [REVISION] [flags]&lt;/code&gt;  command can be used to roll back to any previous version of the release. Note: a rolled back release will receive a new release number. You can find a list of &lt;a href="https://helm.sh/docs/helm/helm_rollback/#helm"&gt;flags here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Helm library (or SDK)
&lt;/h3&gt;

&lt;p&gt;The Helm Library (or SDK) refers to the Go code that interacts directly with the Kubernetes API server to install, upgrade, query, and remove Kubernetes resources. It can be imported into a project to use Helm as a client library instead of a CLI. &lt;/p&gt;

&lt;p&gt;This is an advanced technique released in Helm 3. You can check the official &lt;a href="https://pkg.go.dev/helm.sh/helm/v3#section-readme"&gt;docs here&lt;/a&gt;. And &lt;a href="https://blog.devops.dev/helm-inside-your-code-helm-sdk-51c0e023f872"&gt;examples here&lt;/a&gt;.&lt;/p&gt;
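&lt;p&gt;As a rough sketch (not a production setup), listing releases with the SDK looks roughly like this; it assumes a cluster reachable via your current kubeconfig and uses the &lt;code&gt;default&lt;/code&gt; namespace:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;package main

import (
    "fmt"
    "log"

    "helm.sh/helm/v3/pkg/action"
    "helm.sh/helm/v3/pkg/cli"
)

func main() {
    settings := cli.New()

    // Initialize an action configuration against the current kubeconfig context.
    cfg := new(action.Configuration)
    if err := cfg.Init(settings.RESTClientGetter(), "default", "secret", log.Printf); err != nil {
        log.Fatal(err)
    }

    // The SDK equivalent of `helm list`.
    list := action.NewList(cfg)
    releases, err := list.Run()
    if err != nil {
        log.Fatal(err)
    }
    for _, rel := range releases {
        fmt.Println(rel.Name, rel.Chart.Metadata.Version)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;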

&lt;h2&gt;
  
  
  Helm architecture
&lt;/h2&gt;

&lt;p&gt;This diagram better explains how Helm uses charts and value files to manage releases (deployed resources) in your Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yvwaNG9A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ljxzsmuewhbqpmegaoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yvwaNG9A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ljxzsmuewhbqpmegaoy.png" alt="Helm architecture" width="800" height="835"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next diagram shows the OpenTelemetry Operator Helm chart workflow: the AWS Observability team builds and maintains the Helm chart in a public repo, from which it can be seamlessly downloaded and deployed to users’ clusters.&lt;/p&gt;

&lt;p&gt;This process also has several benefits compared to the previous methods of deploying an OpenTelemetry operator.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Added functionality such as installing/uninstalling packages, upgrades, rollbacks, and customized installations.&lt;/li&gt;
&lt;li&gt;Flexibility for users to configure values through the values.yaml file: you determine which values to pass to the OpenTelemetry Operator Helm chart configuration, and you can override multiple values with one command.&lt;/li&gt;
&lt;li&gt;It’s the easiest way to deploy the Operator to Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more information, check out this &lt;a href="https://aws.amazon.com/blogs/opensource/building-a-helm-chart-for-deploying-the-opentelemetry-operator/"&gt;AWS blog&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KL3ddKgV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xso58dnwmkbwg3leh9wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KL3ddKgV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xso58dnwmkbwg3leh9wx.png" alt="AWS Helm chart workflow" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why should you use Helm?
&lt;/h2&gt;

&lt;p&gt;There are several ways to deploy and manage resources on Kubernetes; why should you choose Helm? Popular alternatives are &lt;a href="https://github.com/kubernetes-sigs/kustomize"&gt;Kustomize&lt;/a&gt;, &lt;a href="https://github.com/grafana/tanka"&gt;Tanka&lt;/a&gt;, and &lt;a href="https://carvel.dev/"&gt;Carvel&lt;/a&gt;, all of which have less mature communities than Helm and far fewer publicly available charts (packages).&lt;/p&gt;

&lt;p&gt;Helm has emerged as the clear winner with its ability to handle both simple and complex configurations, versioning, reusability, etc.&lt;/p&gt;

&lt;p&gt;There are also ways to use &lt;a href="https://trstringer.com/helm-kustomize/"&gt;Helm and Kustomize&lt;/a&gt; together.&lt;/p&gt;

&lt;p&gt;Here are the key benefits of Helm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt;: Defining, installing, upgrading, and rolling back complex Kubernetes applications can be done with a single command. This greatly simplifies the management and deployment of Kubernetes resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reusability&lt;/strong&gt;: Helm charts are essentially packages of pre-configured Kubernetes resources. The charts can be reused across projects and shared with the wider community.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configurability:&lt;/strong&gt; Helm provides a highly configurable structure with charts (templates) and values (configs). Just by changing a few parameters, the same chart can be used to deploy to multiple environments, such as staging and production, or to multiple cloud providers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: Helm charts provide a standardized way of packaging and deploying Kubernetes resources. This can help ensure consistency across different environments and reduce the risk of errors or inconsistencies in deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: With Helm, you can easily scale your Kubernetes applications up or down by adjusting the values in the &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community&lt;/strong&gt;: Helm has a large and active community that is constantly developing and improving the tool. This means that there are many resources and best practices available to help you get the most out of Helm.&lt;/li&gt;
&lt;/ul&gt;
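&lt;p&gt;The scalability point in practice: values can be overridden per upgrade without editing the chart. Note that &lt;code&gt;replicaCount&lt;/code&gt; is a common convention rather than a guaranteed key; check the chart’s values.yaml first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Bump replicas while keeping all previously set values
helm upgrade my-nginx bitnami/nginx --reuse-values --set replicaCount=3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;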

&lt;h2&gt;
  
  
  Tutorial - Deploy Helm charts using Argonaut
&lt;/h2&gt;

&lt;p&gt;Here’s a quick demo of deploying Helm charts of OSS and third-party tools to your Kubernetes cluster using Argonaut. Argonaut’s platform helps you simplify cloud deployments and infra management. You can choose any publicly available Helm chart and deploy it to your Kubernetes cluster on AWS or GCP.&lt;/p&gt;

&lt;p&gt;The beauty of using Argonaut is that you don’t need to use any Helm commands. You can set the chart configs and version, and edit &lt;code&gt;values.yaml&lt;/code&gt;, using our simple UI. Your Kubernetes resources are then deployed following GitOps best practices, with ArgoCD under the hood. Also, when you create a cluster using Argonaut, we automatically add essential apps such as Keda, Kubernetes events exporter, Nginx-Ingress, Cert-manager, Prometheus, and metrics-server.&lt;/p&gt;

&lt;p&gt;In this example, we will be deploying &lt;strong&gt;rabbitmq&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Argonaut account&lt;/li&gt;
&lt;li&gt;AWS/GCP account connected to Argonaut&lt;/li&gt;
&lt;li&gt;A k8s cluster created using/imported to Argonaut&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Deploying Helm chart
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Select your environment from the Argonaut &lt;a href="https://ship.argonaut.dev/dashboard/environments"&gt;dashboard&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;In your selected cluster, click on the &lt;code&gt;Add-ons&lt;/code&gt; button to see the pre-installed apps
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0YkTUkil--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0navy8lgi4mtiaue4a5w.png" alt="Add on Apps" width="800" height="226"&gt;
&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;Application +&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;From Library&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Ensure your environment and cluster names are correct&lt;/li&gt;
&lt;li&gt;Select the &lt;code&gt;Custom-Apps&lt;/code&gt; option

&lt;ol&gt;
&lt;li&gt;Use the default &lt;code&gt;tools&lt;/code&gt; namespace&lt;/li&gt;
&lt;li&gt;Set release name as &lt;code&gt;my-rabbitmq&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Set chart name as &lt;code&gt;rabbitmq&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Repo URL as &lt;code&gt;https://charts.bitnami.com/bitnami&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The latest chart version will be automatically fetched; you can change to an older version if needed&lt;/li&gt;
&lt;li&gt;Click on load &lt;code&gt;values.yaml&lt;/code&gt; to see the values file&lt;/li&gt;
&lt;li&gt;You can modify as needed and add overrides or annotations
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rfwwDD2g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yjb49g5tj9mtexfbkxak.png" alt="Editing the Values file" width="800" height="547"&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Click &lt;code&gt;Install&lt;/code&gt;. RabbitMQ is now added to your cluster!&lt;/li&gt;
&lt;/ol&gt;
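&lt;p&gt;When editing the values file, a small set of overrides is often enough. Here is a sketch of common overrides for the Bitnami rabbitmq chart; key names vary between chart versions, so verify them against the &lt;code&gt;values.yaml&lt;/code&gt; loaded in the UI:&lt;/p&gt;

```yaml
# Illustrative overrides for the Bitnami rabbitmq chart; verify key names
# against the values.yaml loaded in the UI for your chart version.
replicaCount: 3
auth:
  username: argonaut
  existingPasswordSecret: my-rabbitmq-auth
persistence:
  enabled: true
  size: 8Gi
resources:
  requests:
    cpu: 250m
    memory: 512Mi
```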

&lt;p&gt;It’s just as simple to install other apps using Helm charts. Just ensure the chart name matches the one in the repo. To find the correct Helm repo URL for an app, search &lt;a href="https://artifacthub.io/"&gt;https://artifacthub.io/&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 If you want to try installing it using Helm CLI commands, here is a &lt;a href="https://getbetterdevops.io/helm-quickstart-tutorial/"&gt;useful tutorial&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Helm has proven to be an invaluable tool for managing Kubernetes applications. With its powerful features, such as simplicity, reusability, configurability, consistency, and scalability, Helm enables users to harness the full potential of Kubernetes without a steep learning curve. Additionally, the active and growing community behind Helm ensures that it will continue to improve and remain the go-to solution for deploying and managing complex Kubernetes resources.&lt;/p&gt;

&lt;p&gt;By using tools like Argonaut alongside Helm charts, you can simplify your cloud deployments even further and streamline your infrastructure management processes, as demonstrated in our tutorial on deploying RabbitMQ with Argonaut.&lt;/p&gt;

&lt;p&gt;Ultimately, if you're looking to deploy and manage Kubernetes applications efficiently while minimizing errors and inconsistencies in deployment processes, consider adopting Helm as your go-to package manager.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;References for this article:&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://helm.sh/docs/glossary/"&gt;https://helm.sh/docs/glossary/&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://circleci.com/blog/what-is-helm"&gt;https://circleci.com/blog/what-is-helm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://trstringer.com/helm-kustomize/"&gt;https://trstringer.com/helm-kustomize/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/opensource/building-a-helm-chart-for-deploying-the-opentelemetry-operator/"&gt;https://aws.amazon.com/blogs/opensource/building-a-helm-chart-for-deploying-the-opentelemetry-operator/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://helm.sh/docs/intro/quickstart/"&gt;https://helm.sh/docs/intro/quickstart/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>helm</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Engineering: A Technical Exploration of Argonaut's Notifications System</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Tue, 30 May 2023 04:54:49 +0000</pubDate>
      <link>https://dev.to/argonaut/engineering-a-technical-exploration-of-argonauts-notifications-system-jn2</link>
      <guid>https://dev.to/argonaut/engineering-a-technical-exploration-of-argonauts-notifications-system-jn2</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is written with massive help from &lt;a href="https://prajjwal.me/"&gt;Prajjwal Dimri&lt;/a&gt; who built the Notifications feature.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We are excited to add Notifications to Argonaut. Notifications give users instant updates on any build and deploy actions that occur in their org. This blog walks through our approach and the details of our architecture, along with use cases for this feature.&lt;/p&gt;

&lt;p&gt;There are three layers in our architecture: the transformation layer, the processing layer, and the fanout layer. We built the transformation and processing layers in-house and used &lt;a href="https://novu.co/"&gt;Novu&lt;/a&gt; for the fanout layer. This saved us time by avoiding re-engineering a component that is fairly consistent across notification systems in various products. It also provided us with additional benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Easy integrations with communication channels that our users are on.&lt;/li&gt;
&lt;li&gt;Web widget out of the box, making the integration into Argonaut seamless.&lt;/li&gt;
&lt;li&gt;User-level controls to subscribe/unsubscribe from in-app and email notifications.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Notifications play a crucial role in keeping you and your team informed about build and deploy stage updates in real-time. We recognize that the effectiveness of notifications hinges on their promptness, delivery through optimal channels, and appropriate frequency. Consequently, we have designed a customizable notifications system that allows you to receive only the desired notifications through your preferred medium, such as Slack, Teams, Discord, or email.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use cases
&lt;/h2&gt;

&lt;p&gt;You get a lot of useful information at a glance! The notifications look something like this. And clicking on any notification takes you directly to the respective screen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aPZvkQPc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbukfxqzq14ptp3evxad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aPZvkQPc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kbukfxqzq14ptp3evxad.png" alt="Notifications in Argoanut UI" width="800" height="847"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We currently provide different kinds of updates for application developers, with events catering to infra coming soon.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role in your company&lt;/th&gt;
&lt;th&gt;Get updates on&lt;/th&gt;
&lt;th&gt;So that you can&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Application Developer&lt;/td&gt;
&lt;td&gt;High-priority notifications in the case of my builds or deployment pipelines failing&lt;/td&gt;
&lt;td&gt;Fix the errors quickly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application Developer&lt;/td&gt;
&lt;td&gt;High-priority notifications in the case of pods failing to start after a new deployment&lt;/td&gt;
&lt;td&gt;Fix the errors quickly&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Application Developer&lt;/td&gt;
&lt;td&gt;Opt-in to receive notifications when pipelines succeed&lt;/td&gt;
&lt;td&gt;Be confident that the deployments went through&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;👇 Coming soon 👇&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevOps Engineer&lt;/td&gt;
&lt;td&gt;Pods take a very high amount of resources (close to the set limit) in the cluster&lt;/td&gt;
&lt;td&gt;Get this fixed by allocating higher limits or getting the dev team to lower the resource usage of the pod&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevOps Engineer&lt;/td&gt;
&lt;td&gt;Pods getting evicted&lt;/td&gt;
&lt;td&gt;Fix it by allocating higher limits or getting the dev team to lower the resource usage of the pod&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevOps Engineer&lt;/td&gt;
&lt;td&gt;Cluster spinning up more nodes than the desired nodes config&lt;/td&gt;
&lt;td&gt;Check why more nodes are getting spun up and how this is affecting our costs in our cloud provider&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevOps Engineer&lt;/td&gt;
&lt;td&gt;Get high-priority notifications if any infra-CRUD operations fail&lt;/td&gt;
&lt;td&gt;Fix and restart the infra operation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A Manager&lt;/td&gt;
&lt;td&gt;Get notified when environments are created or deleted.&lt;/td&gt;
&lt;td&gt;Be notified of resource usage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A Manager&lt;/td&gt;
&lt;td&gt;VCS &amp;amp; cloud account integrations are added or deleted&lt;/td&gt;
&lt;td&gt;Be in sync about users’ actions in the org&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Notifications architecture
&lt;/h2&gt;

&lt;p&gt;Here’s a quick overview of our implementation. The process can be broken down into three layers: The transformation layer, the processing layer, and the fanout layer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HhASiT6O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/geylc9n364dti96znku5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HhASiT6O--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/geylc9n364dti96znku5.png" alt="Architecture of notifications in Argonaut" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Events
&lt;/h3&gt;

&lt;p&gt;Actions by users, such as creating new apps, updating a build or deploy config, and triggering a pipeline, are all considered events. We use event-driven programming in our backend, where our internal services communicate with each other by publishing and consuming events from a queue. For example, when a build is triggered, runs, completes, or fails, the build service publishes these events to the queue, which other services can consume.&lt;/p&gt;

&lt;h3&gt;
  
  
  Transformer
&lt;/h3&gt;

&lt;p&gt;The notification transformer listens to a particular topic in the queue, processes the message, and transforms it into a standard format (including information about the org and the user).&lt;/p&gt;

&lt;p&gt;Transformers can handle the various types of events generated in the system and also contain the logic to assign each event a priority (a priority queue is coming soon).&lt;/p&gt;
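&lt;p&gt;As a sketch of that priority logic, the mapping from event type to priority tier might look like the following. The event names and tiers here are illustrative, not Argonaut’s internal enum:&lt;/p&gt;

```go
package main

import "fmt"

// priorityFor assigns a numeric priority to an event type.
// Lower numbers would be dispatched first once the priority queue lands.
// The event names and tiers are illustrative.
func priorityFor(eventType string) int {
	switch eventType {
	case "build.failed", "deploy.failed", "pod.crashloop":
		return 0 // high priority: the user needs to act now
	case "build.succeeded", "deploy.succeeded":
		return 1 // opt-in confirmations
	default:
		return 2 // informational updates
	}
}

func main() {
	fmt.Println(priorityFor("build.failed")) // prints 0
	fmt.Println(priorityFor("env.created"))  // prints 2
}
```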

&lt;p&gt;&lt;strong&gt;Standard notification format&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key Name&lt;/th&gt;
&lt;th&gt;Key Type&lt;/th&gt;
&lt;th&gt;Key Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;type&lt;/td&gt;
&lt;td&gt;EventType (Enum)&lt;/td&gt;
&lt;td&gt;Type of event that caused this payload to be created&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;priority&lt;/td&gt;
&lt;td&gt;number&lt;/td&gt;
&lt;td&gt;Priority for the event&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;users&lt;/td&gt;
&lt;td&gt;Array (User Ids)&lt;/td&gt;
&lt;td&gt;The users for which this event is relevant&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;organizations&lt;/td&gt;
&lt;td&gt;Array (Org Ids)&lt;/td&gt;
&lt;td&gt;The organizations for which this event is relevant&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;createdAt&lt;/td&gt;
&lt;td&gt;Date-Time&lt;/td&gt;
&lt;td&gt;Event creation timestamp&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;metadata&lt;/td&gt;
&lt;td&gt;JSON&lt;/td&gt;
&lt;td&gt;Additional metadata and information relevant to the event&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;We use &lt;a href="https://docs.novu.co/platform/subscribers/"&gt;Novu subscribers&lt;/a&gt; in the fanout layer. Therefore, we internally map each org/user to a Novu subscriberId.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Processor
&lt;/h3&gt;

&lt;p&gt;The processor takes the user and org information from the payload, looks up the internal mapping, and swaps each ID for its Novu subscriber ID. This mapping is required in the next step to connect with the fanout layer and ensure the notifications are routed to the right users.&lt;/p&gt;
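&lt;p&gt;A minimal sketch of that ID swap, assuming the mapping is available as a simple lookup (the store and all names here are illustrative, not Argonaut’s internals):&lt;/p&gt;

```go
package main

import "fmt"

// subscriberIDsFor swaps internal user/org IDs for their Novu subscriberIds.
// IDs without a mapping are skipped; the map stands in for Argonaut's
// internal store and the values are illustrative.
func subscriberIDsFor(idToSubscriber map[string]string, ids []string) []string {
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		if sub, ok := idToSubscriber[id]; ok {
			out = append(out, sub)
		}
	}
	return out
}

func main() {
	mapping := map[string]string{
		"user-42": "novu-sub-a1",
		"org-7":   "novu-sub-b2",
	}
	// "user-99" has no mapping and is dropped.
	fmt.Println(subscriberIDsFor(mapping, []string{"user-42", "org-7", "user-99"}))
	// prints [novu-sub-a1 novu-sub-b2]
}
```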

&lt;h3&gt;
  
  
  Fanout layer
&lt;/h3&gt;

&lt;p&gt;The final layer is the Fanout layer, which is managed by Novu. Using Novu made our integration with Slack, Teams, email, and Discord a breeze. The fanout layer (Novu) receives the payload and sends it to the subscribers (members in the same workspace).&lt;/p&gt;

&lt;p&gt;Note: Every user and org ID is mapped to its respective &lt;code&gt;subscriberId&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="n"&gt;defaultFanOutService&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;TriggerFanOut&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;pkgCommons&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;eventId&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subscriberIds&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="k"&gt;map&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="k"&gt;interface&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IError&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="n"&gt;novuClient&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;d&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;getNovuClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;novuClient&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;EventApi&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Trigger&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;eventId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;novu&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ITriggerPayloadOptions&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;

&lt;span class="n"&gt;To&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;subscriberIds&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

&lt;span class="n"&gt;Payload&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;

&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Journey of a Notification
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;An event occurs and is published by one of Argonaut’s internal backend services.&lt;/li&gt;
&lt;li&gt;The event is added to the SNS queue.&lt;/li&gt;
&lt;li&gt;The appropriate mapped notification transformer retrieves the event.&lt;/li&gt;
&lt;li&gt;The transformer decides whether the event should be processed.&lt;/li&gt;
&lt;li&gt;The transformer fetches all the users and organizations for which the event is relevant. For example, in the case of a failed build event, the relevant org and its users are fetched.&lt;/li&gt;
&lt;li&gt;A notification processor (goroutine) is spawned and picks up the notification.&lt;/li&gt;
&lt;li&gt;The processor loops over the given users and organizations.&lt;/li&gt;
&lt;li&gt;On every iteration, it fetches the subscription preferences, creates a payload to trigger Novu, and dispatches the payload.&lt;/li&gt;
&lt;li&gt;Novu does its magic and sends notifications to users through their selected channels.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Experience using Novu
&lt;/h2&gt;

&lt;p&gt;In our setup, Novu forms the fanout layer, and its main responsibility is sending out notifications to various platforms, e.g., Slack, Email, UI, etc. We are currently using &lt;a href="https://novu.co/pricing/"&gt;Novu Cloud&lt;/a&gt;, which has a free tier supporting up to 10K events per month.&lt;/p&gt;

&lt;p&gt;We use &lt;a href="https://docs.novu.co/platform/templates/"&gt;Templates&lt;/a&gt; offered by Novu, which ties in all the different channels under a single entity. This provides a consistent experience for both the developer and the user, regardless of the channel they choose to receive the notification on.&lt;/p&gt;

&lt;p&gt;Notification entries and subscriber preferences are stored in Novu’s MongoDB database. Notification aggregation (Digest) is also handled by Novu.&lt;/p&gt;

&lt;p&gt;We preferred Novu over Courier &amp;amp; Knock as it was more economical and is also an open-source project.&lt;/p&gt;

&lt;p&gt;Novu’s active &lt;a href="https://discord.com/invite/novu"&gt;Discord community&lt;/a&gt; was supportive when we had queries regarding implementation. We were also able to add &lt;a href="https://github.com/novuhq/go-novu/pulls?q=is%3Apr+author%3Aprajjwaldimri+"&gt;our contributions&lt;/a&gt; to their Go SDK.&lt;/p&gt;

&lt;p&gt;We’ve also used Novu’s front-end React client to show notifications within the Argonaut UI. It was easy to use and to configure to match our brand colors and design; the configuration took under an hour.&lt;/p&gt;

&lt;h2&gt;
  
  
  Upcoming features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;External event integrations where we would add a Webhook processor to receive notifications from external services and third-party tools.&lt;/li&gt;
&lt;li&gt;Adding a priority queue so that high-priority issues like pod failures and cost escalations are shared with the user instantly.&lt;/li&gt;
&lt;li&gt;Notifications for more events catering to different types of users, such as infra updates, cost, pod utilization, and billing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, adding Notifications to Argonaut has greatly improved the user experience by giving quick updates on build and deploy actions. Using Novu, we've created a smooth and personalized notification system with options for updates through channels like Slack, Teams, Discord, or email.&lt;/p&gt;

&lt;p&gt;Our implementation spans the transformation, processing, and fanout layers, making sure each notification is useful, prompt, and correct. Working with Novu has led to a more cost-effective solution. We're sure our users will appreciate this new feature, and we're eager to keep enhancing and growing our offerings to meet the various needs of our users.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>notification</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Set Up Your Internal Developer Platform With These Open-Source Solutions</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Wed, 19 Apr 2023 07:22:38 +0000</pubDate>
      <link>https://dev.to/argonaut/set-up-your-internal-developer-platform-with-these-open-source-solutions-3nhc</link>
      <guid>https://dev.to/argonaut/set-up-your-internal-developer-platform-with-these-open-source-solutions-3nhc</guid>
      <description>&lt;p&gt;Internal Developer Platforms (IDPs) are revolutionizing the way developers work by automating repetitive tasks, standardizing workflows, and reducing the time spent on infrastructure management. This not only leads to increased productivity but also fosters a culture of collaboration, innovation, and growth within the team. In this article, we dive into the world of open-source solutions that help you elevate your development process, streamline workflows, and improve efficiency. In other words, these solutions put together can provide you with the benefits of an IDP.&lt;/p&gt;

&lt;p&gt;While there are several &lt;a href="https://www.argonaut.dev/blog/internal-developer-platform-guide#commercially-available-internal-developer-platforms"&gt;SaaS products&lt;/a&gt; like Argonaut, there might be org-specific needs that call for bespoke tooling. In such cases, open-source solutions can be leveraged to build your own IDP.&lt;/p&gt;

&lt;p&gt;By leveraging open-source software to build your IDP, you benefit from a vast community of developers and documentation. However, in reality, there is no plug-and-play OSS IDP and some assembly is required. Some of the reasons are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Unique requirements: Your organization’s requirements can vastly differ from those of other organizations that use IDPs. Matching these feature for feature would usually require building a custom solution or heavily modifying an available OSS project.&lt;/li&gt;
&lt;li&gt;Community dependence: Open-source solutions rely on community support, so ensuring there’s an active community, or that you have the resources to keep the tooling up-to-date, is important.&lt;/li&gt;
&lt;li&gt;Integration challenges: It is unlikely to find open-source solutions that seamlessly integrate with your company's existing tools, technologies, and infrastructure. This could lead to additional time and effort spent on customizing and integrating the solutions into your IDP, which may not always be feasible. Some OSS tools also lack proper documentation and don’t offer support, which could make implementing them even more challenging and costly.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tooling for an Internal Developer Platform
&lt;/h2&gt;

&lt;p&gt;The minimal requirements for an effective Internal Developer Platform (IDP) are an IaC solution, GitOps tooling, and a service catalog. There can be &lt;a href="https://internaldeveloperplatform.org/platform-tooling/"&gt;several other tools&lt;/a&gt;, such as monitoring, databases, CI tools, and security, as part of your IDP. Together, these help your dev team automate repetitive tasks and standardize workflows.&lt;/p&gt;

&lt;p&gt;Here we explore popular tools in each of these categories, along with alternatives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Defining Infrastructure as Code (IaC)
&lt;/h3&gt;

&lt;p&gt;An Infrastructure as Code tool is an essential component of an IDP; it enables managing infra at scale and in a declarative way. It provides &lt;a href="https://www.argonaut.dev/blog/infrastructure-as-code-tools#the-benefits-of-iac"&gt;several benefits&lt;/a&gt;, like more effective collaboration, managing cloud complexity, and improved consistency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Crossplane&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Crossplane is a framework for building cloud-native control planes without the need to write code. It provides the building blocks that enable you to provision, compose, and consume infrastructure with the Kubernetes API. Its ability to work as a control plane, interact with multiple services across vendors, and create &lt;a href="https://docs.crossplane.io/v1.9/concepts/composition/"&gt;custom&lt;/a&gt; and &lt;a href="https://docs.crossplane.io/v1.9/getting-started/provision-infrastructure/"&gt;composite resources&lt;/a&gt; make it suitable for IDPs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Set up Crossplane in your Kubernetes cluster (&lt;a href="https://docs.crossplane.io/v1.11/software/install/"&gt;Helm Install&lt;/a&gt;) to manage and provision infrastructure resources. You can then define custom resources for the cloud services and infrastructure components you want to manage.&lt;/p&gt;
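&lt;p&gt;As a sketch, a Crossplane managed resource for an AWS RDS instance might look like the following. All names and values are illustrative, and the exact API group and fields depend on the provider version you install:&lt;/p&gt;

```yaml
# Illustrative Crossplane managed resource (AWS provider); field names
# depend on the provider version installed in your cluster.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: example-db
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t3.micro
    masterUsername: adminuser
    allocatedStorage: 20
    engine: postgres
  writeConnectionSecretToRef:
    name: example-db-conn
    namespace: crossplane-system
```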

&lt;p&gt;&lt;strong&gt;Alternatives&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terraform is a powerful and flexible Infrastructure as Code (IaC) tool that can be used to build, manage, and scale internal developer platforms. Its extensibility, multi-cloud support, declarative configuration, and composability make it an ideal choice for organizations looking to streamline their infrastructure management processes and empower their development teams.&lt;/li&gt;
&lt;li&gt;Pulumi is a versatile IaC tool designed with cloud-native applications in mind. It allows developers to manage and provision infrastructure using familiar programming languages. Its versatility, multi-cloud support, and integration with Kubernetes make it an ideal solution for constructing internal developer platforms (IDPs). By leveraging Pulumi's language-specific SDKs and reusable components, teams can efficiently collaborate, standardize infrastructure configurations, and create customized resources tailored to their specific needs.
&lt;blockquote&gt;&lt;p&gt;💡 For more IaC tools, check out our &lt;a href="https://www.argonaut.dev/blog/infrastructure-as-code-tools"&gt;Top IaC tools&lt;/a&gt; article. And here’s an &lt;a href="https://www.argonaut.dev/blog/comprehensive-iac-comparison"&gt;in-depth comparison&lt;/a&gt; between Pulumi, Terraform, and AWS CloudFormation.&lt;/p&gt;&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Continuous Delivery (CD)
&lt;/h3&gt;

&lt;p&gt;Continuous delivery tools are crucial in IDPs, enabling faster and more reliable software releases. By automating the deployment process, teams can minimize human errors, enhance collaboration, and maintain consistent quality throughout the development lifecycle. This ultimately accelerates innovation and increases overall efficiency within an organization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ArgoCD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/argoproj/argo-cd"&gt;ArgoCD&lt;/a&gt; is an open-source Continuous Delivery tool for automating application deployment in Kubernetes clusters. Utilizing the GitOps methodology, it monitors Git repositories to synchronize the desired state with the live environment, ensuring efficient and reliable application delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Set up a Git repository containing the desired state of your applications and infrastructure, including the custom resources &lt;a href="https://docs.crossplane.io/v1.10/guides/argo-cd-crossplane/"&gt;defined using Crossplane&lt;/a&gt;. ArgoCD can then be used to manage the deployment and configuration of your applications and infrastructure resources using GitOps methodology.&lt;/p&gt;
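&lt;p&gt;A minimal ArgoCD &lt;code&gt;Application&lt;/code&gt; manifest pointing at such a repository might look like the following sketch; the repo URL and path are hypothetical:&lt;/p&gt;

```yaml
# Illustrative ArgoCD Application; repoURL and path are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-rabbitmq
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-repo
    targetRevision: main
    path: apps/rabbitmq
  destination:
    server: https://kubernetes.default.svc
    namespace: tools
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift in the cluster
```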

&lt;p&gt;&lt;strong&gt;Alternatives&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://fluxcd.io/"&gt;Flux&lt;/a&gt; is a popular GitOps alternative to ArgoCD. It helps you manage deployments, resources, and integrations with various Git providers and provides multi-tenancy support. It uses a cluster operator to start deployments in Kubernetes, so there's no need for another CD tool.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gimlet.io/"&gt;Gimlet&lt;/a&gt; is a command line tool and a dashboard that packages a set of conventions and matching workflows to manage a GitOps developer platform effectively. It is built on top of Helm and Flux and provides you with a paved path, a set of best-practices, so you can focus on your task at hand.
&lt;blockquote&gt;&lt;p&gt;💡 &lt;a href="https://www.argonaut.dev/blog/gitops-argocd-vs-flux"&gt;In-depth comparison between ArgoCD and Flux&lt;/a&gt;.&lt;/p&gt;&lt;/blockquote&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Service Catalogs
&lt;/h3&gt;

&lt;p&gt;A service catalog, such as &lt;a href="http://backstage.io/"&gt;backstage.io&lt;/a&gt;, serves as a developer portal that offers a comprehensive view of various applications, services, and resources managed by engineering teams. It includes information about ownership, metadata, and essential service-related links. While it primarily benefits engineers and engineering managers, it does not directly address the need for quicker collaboration between Operations and engineering teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backstage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://backstage.io/"&gt;Backstage&lt;/a&gt; is a comprehensive platform designed to streamline the development process for product teams by centralizing software catalogs, infrastructure tooling, services, and documentation. Key features include the Backstage Software Catalog for managing various software types, Software Templates for creating new projects in alignment with organizational best practices, and TechDocs for seamless technical documentation. Additionally, Backstage offers an expanding ecosystem of open-source plugins for enhanced customization and functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting Up&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install and &lt;a href="https://www.youtube.com/watch?v=d2L6PWGfhXI"&gt;configure Backstage&lt;/a&gt; to provide a unified interface for your developers. Integrate Backstage with Crossplane and ArgoCD, so developers can easily discover, manage, and deploy applications and infrastructure resources through the Backstage portal.&lt;/li&gt;
&lt;li&gt;Leverage &lt;a href="https://backstage.io/plugins/"&gt;Backstage’s custom plugins&lt;/a&gt; and integrations to further tailor the platform to your organization's needs. This can include integrating with other tools, services, and APIs used in your development processes.&lt;/li&gt;
&lt;/ol&gt;
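&lt;p&gt;Services appear in the Backstage catalog through a &lt;code&gt;catalog-info.yaml&lt;/code&gt; descriptor committed to each repo. A sketch, with illustrative names (the ArgoCD annotation key assumes the community ArgoCD plugin):&lt;/p&gt;

```yaml
# Illustrative catalog-info.yaml registering a service in Backstage.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles payment processing
  annotations:
    argocd/app-name: payments-service  # links the ArgoCD plugin's deploy view
spec:
  type: service
  lifecycle: production
  owner: team-payments
```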

&lt;p&gt;&lt;strong&gt;Alternatives&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.getport.io/"&gt;Port&lt;/a&gt; can be used as an alternative to &lt;a href="http://Backstage.io"&gt;Backstage.io&lt;/a&gt;. It offers a way to build internal developer portals with context-rich software catalogs with maturity and quality&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.atlassian.com/software/compass"&gt;Compass&lt;/a&gt; by Atlassian aids engineering teams in managing software sprawl by offering a single source of truth for distributed architecture. It enables understanding of built components, ownership, operational health, and applied policies. Compass provides insights into problem areas and changes over time, enhancing architecture and development velocity.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.opslevel.com/"&gt;OpsLevel&lt;/a&gt; is a uniform interface that lets developers manage everything from one place, including their tools, services, and systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here we’ve used Crossplane for IaC and as an infrastructure control plane, ArgoCD for Continuous Delivery and GitOps best practices, and Backstage as a service catalog. This combination provides a powerful, customized IDP that simplifies the management of your cloud infrastructure while providing a seamless experience for your developers. &lt;a href="https://www.youtube.com/watch?v=d2L6PWGfhXI"&gt;Here’s a video demo&lt;/a&gt; of one such setup.&lt;/p&gt;

&lt;p&gt;The setup and its complexity may vary depending on your company’s size and the tools you are currently using. It will also require a significant time and development commitment to build and maintain the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add-on capabilities for your IDP
&lt;/h2&gt;

&lt;p&gt;While the combination of the above three tools provides a basic IDP, the tools below can provide add-on capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes control planes
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="http://Shipa.io"&gt;Shipa.io&lt;/a&gt; is a Kubernetes control plane that provides an abstraction level for deploying clusters while maintaining the same user experience. It requires YAML files for configuration and CI pipeline connections but lacks dynamic workload and environment creation. Shipa is suitable for governance purposes but may not be ideal for building an IDP.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubermatic/kubermatic"&gt;Kubermatic Kubernetes Platform&lt;/a&gt; (KKP) is an open source project that helps centrally manage the global automation of thousands of Kubernetes clusters across multicloud, on-prem and edge with unparalleled density and resilience. KKP is compatible with all major cloud providers even supports custom infrastructure setups. By offering a user-friendly, self-service portal for developers and IT teams, KKP simplifies the complexities of managing cloud-native IT infrastructure and multi-cloud operations.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Continuous Integrations (CI)
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://docs.gitlab.com/ee/ci/"&gt;GitLab&lt;/a&gt; is a versatile application that many organizations depend on for tasks like source code management, continuous integration, and deployment. It provides the flexibility of creating and running pipelines with multiple CI/CD stages. &lt;a href="https://docs.gitlab.com/ee/topics/autodevops/index.html"&gt;Auto DevOps&lt;/a&gt; makes using GitLab a breeze.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/features/actions"&gt;GitHub Actions&lt;/a&gt; provides a wide range of pre-built actions, integration with third-party tools, and the ability to create custom actions, making it a versatile solution for implementing CI in any project. With CI, developers can automatically build, test, and validate their code whenever changes are made, ensuring that it is always in a deployable state.&lt;/li&gt;
&lt;/ol&gt;
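&lt;p&gt;Whichever CI system runs it, the build-test-validate loop described above usually boils down to a few commands inside a CI job. A minimal sketch, where the image name and test command are placeholders and each step is printed rather than executed:&lt;/p&gt;

```shell
# Sketch of the steps a CI job runs on each push: build an image,
# run the test suite inside it, and push only if tests pass.
# IMAGE and the test command are placeholders for your own project.
set -eu
IMAGE="registry.example.com/myapp"
TAG="${GIT_SHA:-dev}"        # CI systems usually expose the commit SHA

step() { echo "[ci] $*"; }   # swap echo for eval to actually execute

step "docker build -t ${IMAGE}:${TAG} ."
step "docker run --rm ${IMAGE}:${TAG} npm test"
step "docker push ${IMAGE}:${TAG}"
```

&lt;p&gt;With &lt;code&gt;set -eu&lt;/code&gt;, any failing step aborts the job, which is what keeps broken builds out of the registry.&lt;/p&gt;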

&lt;h3&gt;
  
  
  Monitoring and observability
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;ELK stack: Elasticsearch, Logstash, and Kibana form a popular open-source stack for log management, covering the collection, storage, and analysis of log data, with Kibana handling visualization.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://grafana.com/"&gt;Grafana stack&lt;/a&gt;: Grafana, Loki, Tempo is an open-source stack to compose observability dashboards with everything from Prometheus and Graphite metrics to logs and application data. Grafana connects with a plethora of data sources, including Graphite, Prometheus, Influx DB, ElasticSearch, MySQL, and PostgreSQL. It helps to monitor and analyze data and track user and application behavior, including error type and frequency in pre-production and production environments. If you’re ok with the additional overhead, it’s a great way to monitor.&lt;/li&gt;
&lt;/ol&gt;
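&lt;p&gt;If you go the Grafana route, one common starting point is the &lt;code&gt;loki-stack&lt;/code&gt; chart from Grafana’s public Helm repo, which can ship Loki and Grafana together. A sketch, with a placeholder namespace and the commands printed rather than run:&lt;/p&gt;

```shell
# Sketch: stand up Loki plus Grafana via Grafana's public Helm charts.
# Commands are echoed for review; drop the cmd wrapper to execute.
set -eu
cmd() { echo "$*"; }

cmd helm repo add grafana https://grafana.github.io/helm-charts
cmd helm upgrade --install loki grafana/loki-stack --namespace monitoring --create-namespace --set grafana.enabled=true
```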

&lt;blockquote&gt;
&lt;p&gt;💡 For more observability tools, check &lt;a href="https://www.argonaut.dev/blog/observability-top-20"&gt;out this article&lt;/a&gt;. Here’s an in-depth comparison between &lt;a href="https://www.argonaut.dev/blog/observability-comparison-2022"&gt;Datadog, New Relic, and Splunk&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Best Practices for Internal Developer Platforms
&lt;/h2&gt;

&lt;p&gt;Here are some best practices to consider while creating and maintaining your own Internal Developer Platform (IDP).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensuring clear documentation - IDPs are meant to be easy to use and provide self-serve capabilities. Whether built in-house or purchased, clear and concise documentation helps users understand why and how to resolve their issues. A well-documented IDP also makes it easy for new team members to get up to speed and collaborate better.&lt;/li&gt;
&lt;li&gt;Encouraging collaboration and communication among developers - Since IDPs are used by the entire team, understanding who uses what in an IDP is important. These controls can be set using RBAC in most IDPs.&lt;/li&gt;
&lt;li&gt;Monitoring usage and performance to identify areas for improvement - By adding monitoring abilities, team leaders or Ops professionals can get a sense of cloud usage and associated costs. It is also essential to have observability over issues and logs of the various cloud services.&lt;/li&gt;
&lt;li&gt;Regularly updating SDKs, libraries, and code samples - As a part of maintaining the IDP, maintaining the versions of the dependency libraries is important for the developers. This becomes even more important when the IDP is a combination of several tools and needs to meet security standards.&lt;/li&gt;
&lt;li&gt;Integrating user feedback to enhance the overall developer experience - If you’re just starting out, getting feedback from your developer team is the best way to ensure that your IDP meets the organizational requirements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;IDPs are a way to highlight the golden paths for your team and encode best practices for developing and deploying both infrastructure and applications. Following these five best practices can help you get started in creating an effective and usable platform that elevates your dev team’s productivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The significance of Internal Developer Platforms (IDPs) for developer teams is immense, as they offer a wide range of advantages that help streamline various aspects of the development process. By automating repetitive tasks and standardizing workflows, IDPs allow developers to focus on critical aspects of their work, leading to increased productivity and growth. Additionally, IDPs help reduce the time and effort spent on managing infrastructure, allowing teams to concentrate on innovation and delivering high-quality applications.&lt;/p&gt;

&lt;p&gt;To create an effective IDP that caters to their specific needs, organizations should follow best practices and actively seek user feedback. By doing so, they can ensure that their IDP addresses their unique requirements and enhances the overall developer experience. In today's competitive and rapidly evolving technological landscape, embracing the power of IDPs is crucial for organizations aiming to stay ahead of the curve. Adopting a well-designed and efficient IDP can make all the difference in empowering development teams to excel and drive the organization's success.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://ship.argonaut.dev/"&gt;Argonaut&lt;/a&gt; is a DevOps automation platform designed to streamline the management of both applications and infrastructure, enabling engineering teams to accelerate delivery. By incorporating GitOps best practices, Argonaut simplifies the process of creating and maintaining complex cloud setups. With support for Kubernetes application deployment on AWS and GCP, Argonaut offers a single pane to manage all your cloud apps, infra, integrations, and deployment workflows, catering to a wide range of organizational needs. &lt;a href="https://ship.argonaut.dev/"&gt;Try it out&lt;/a&gt; for free today!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>idp</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Internal Developer Platforms: An In-Depth Guide to Unlocking Developer Productivity</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Tue, 11 Apr 2023 18:04:05 +0000</pubDate>
      <link>https://dev.to/argonaut/internal-developer-platforms-an-in-depth-guide-to-unlocking-developer-productivity-41im</link>
      <guid>https://dev.to/argonaut/internal-developer-platforms-an-in-depth-guide-to-unlocking-developer-productivity-41im</guid>
      <description>&lt;h2&gt;
  
  
  What are Internal Developer Platforms (IDPs)?
&lt;/h2&gt;

&lt;p&gt;Internal developer platforms (IDPs) are platforms that developers can leverage to build and deploy their applications to any of their environments. Their main purpose is to increase the pace of development and reduce developers’ dependence on DevOps and platform engineers through automation and developer self-service. The platform may be built in-house, often as an adaptation of an open-source project, or purchased, for example as a Platform as a Service (PaaS).&lt;/p&gt;

&lt;p&gt;The adoption has been both recent and rapid. Puppet’s state of platform engineering &lt;a href="https://www.puppet.com/resources/state-of-platform-engineering"&gt;survey&lt;/a&gt; finds that over 51% of companies that have adopted Internal Developer Platforms have done it in the past three years. And an overwhelming majority of the respondents (93%) declared that IDP adoption is a step in the right direction.&lt;/p&gt;

&lt;p&gt;Internal developer platforms are good at helping the team manage applications and infrastructure from one place, providing tight integration with the existing tools and services, and giving developers self-serve and collaboration capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of IDPs
&lt;/h3&gt;

&lt;p&gt;IDPs offer several benefits to the organizations using them. The first relates to improvements in infrastructure and IT management, such as increased productivity stemming from reduced communication times between devs and infra teams. Secondly, they reduce the complexity of the cloud and make it easy for people in your org to pick up on the best practices set forward by your team. In larger organizations, they can also be an easy way of managing RBAC to deployments, infra creation and management, and more.&lt;/p&gt;

&lt;p&gt;Responses from Puppet’s State of Platform Engineering 2023 report also show how IDPs have improved &lt;a href="https://www.puppet.com/blog/kpis-devops"&gt;DevOps KPIs&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An increase in development velocity (68%)&lt;/li&gt;
&lt;li&gt;42% say development velocity has improved “a great deal” since they started doing platform engineering&lt;/li&gt;
&lt;li&gt;Improved productivity (59%)&lt;/li&gt;
&lt;li&gt;Improved system reliability (60%)&lt;/li&gt;
&lt;li&gt;Improved security (55%)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Components of IDPs
&lt;/h3&gt;

&lt;p&gt;Let’s dive deeper into the components of IDPs and discuss which parts of the application and infrastructure management process they improve.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Integration with existing tools and services
IDPs are built by integrating with the existing set of tools and services that the company currently uses. These can be your source control systems (e.g., GitHub, GitLab), Continuous Integration and Continuous Deployment (CI/CD) pipelines, and observability tools such as monitoring and logging systems.
In addition, they may also include some form of access control, observability dashboards, and a set of best practices for developers to use.&lt;/li&gt;
&lt;li&gt;Application and infrastructure management
IDPs can be used to automate a lot of the hassle of deployment. By setting up IDPs with GitOps best practices, users can automatically deploy application and infra changes with each commit to the Git code.
Managing multiple environments is another popular use case of IDPs. They can be effective in creating, providing selective access, and managing the use of multiple different environments such as development, testing, pre-prod, and prod. Several tools set up preview environments on the fly.
Their ability to monitor and view deployments across environments helps organizations scale dynamically to changing workloads and maintain consistent performance across their infrastructure.&lt;/li&gt;
&lt;li&gt;Developer self-service capabilities
IDPs empower developers with self-serve capabilities by offering a centralized, user-friendly interface to access tools, resources, and services on demand. These capabilities facilitate faster development cycles, reduce operations teams’ dependency, and enable greater autonomy in seamlessly creating, testing, and deploying applications.
The streamlined workflow also makes it considerably easier for new employees to get started with your stack without diving into the internal workings. Some tools also add a collaboration layer on top that allows different members of a team to work together easily and review changes before deploying the code.&lt;/li&gt;
&lt;li&gt;Collaboration and governance features
IDPs achieve enhanced security and compliance by providing role-based access control (RBAC), ensuring each team member has the appropriate permissions to access specific resources and perform certain actions. Overall, this minimizes the risk of unauthorized access or accidental changes while also promoting collaboration and efficient workflows among cross-functional teams.
IDPs help developer teams maintain transparency, accountability, and traceability throughout the development process by providing audit logs and history tracking. This enables easier identification and resolution of issues, as well as ensuring compliance with regulatory and organizational requirements for data handling and change management.&lt;/li&gt;
&lt;/ol&gt;
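&lt;p&gt;The deploy-on-commit flow in point 2 can be wired up with a single ArgoCD application that watches a repo and syncs automatically. A hedged sketch using the &lt;code&gt;argocd&lt;/code&gt; CLI, where the repo URL, path, and namespace are placeholders and the command is echoed rather than executed:&lt;/p&gt;

```shell
# Sketch: register an app with Argo CD so every commit to the config
# repo is synced to the cluster automatically. All values below are
# placeholders; the command is printed for review, not run.
set -eu
cmd() { echo "$*"; }

cmd argocd app create myapp \
  --repo https://github.com/example/myapp-config \
  --path overlays/prod \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace prod \
  --sync-policy automated
```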

&lt;h3&gt;
  
  
  Platform engineering
&lt;/h3&gt;

&lt;p&gt;Companies that decide to build their own platform usually hire platform engineers, who are specialized professionals responsible for the creation and continuous improvement of the IDP. They work to implement and maintain tools, services, and best practices that streamline development processes, ensuring a smooth and efficient workflow for the organization's developers.&lt;/p&gt;

&lt;p&gt;Platform engineering is the process of designing, building, and maintaining an IDP that provides a centralized, scalable, and efficient infrastructure for developers within an organization. An IDP simplifies and standardizes application development, deployment, and management, enabling faster and more reliable software delivery.&lt;/p&gt;

&lt;h3&gt;
  
  
  Software development trends that are catalysts for IDP adoption
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Digital transformation: As businesses worldwide increasingly embrace digital transformation, there is a growing need for efficient and robust software development processes. IDPs play a crucial role in enabling companies to stay agile and quickly adapt to the dynamic digital landscape.&lt;/li&gt;
&lt;li&gt;Need for faster software delivery: In today's fast-paced business environment, organizations must deliver new features and applications at an unprecedented speed. IDPs provide a standardized, automated, and centralized platform that accelerates development cycles, enabling companies to stay competitive and responsive to market changes.&lt;/li&gt;
&lt;li&gt;The growing complexity of software architectures: With the rise of microservices, containers, and cloud-native technologies, managing and deploying software has become increasingly complex. IDPs help simplify this complexity by providing a unified platform where developers can build, test, and deploy applications with ease, regardless of the underlying architecture.&lt;/li&gt;
&lt;li&gt;Demand for DevOps and CI/CD practices: The need for IDPs grows as more organizations adopt DevOps and Continuous Integration/Continuous Deployment (CI/CD) practices. IDPs enable seamless collaboration between development and operations teams, automating many manual tasks and ensuring smooth transitions throughout the software development lifecycle.&lt;/li&gt;
&lt;li&gt;Scalability and flexibility: IDPs offer a scalable solution that can accommodate the growing needs of organizations, regardless of their size. They provide a flexible platform that can be easily customized and adapted to cater to the unique requirements of different teams and projects.&lt;/li&gt;
&lt;li&gt;Cross-region collaboration: With businesses operating across multiple geographies, the need for a platform that supports cross-region collaboration is essential. IDPs empower development teams spread across the globe to work together seamlessly, enabling efficient knowledge sharing and fostering a culture of innovation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, the rising popularity of IDPs across companies of all sizes and regions worldwide can be attributed to their ability to streamline software development processes, simplify complex architectures, support DevOps and CI/CD practices, and facilitate cross-region collaboration. As the demand for digitalization and agility continues to grow, IDPs are poised to play a critical role in shaping the future of software development.&lt;/p&gt;

&lt;h3&gt;
  
  
  Commercially available internal developer platforms
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://www.argonaut.dev/"&gt;Argonaut&lt;/a&gt; is a DevOps automation platform that helps engineering manage both the app and infra side of things and ship faster! Built with the GitOps best practices in mind, Argonaut reduces the complexities of creating and maintaining cloud setups. Whether it is Kubernetes app deployment to AWS or GCP, there are several runtimes, environments, regions, integrations, and app types to choose from.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://mia-platform.eu/"&gt;Mia Platform&lt;/a&gt; offers a simple way to develop and operate modern cloud applications on Kubernetes. It can be adopted as either a self-hosted or PaaS option. There’s also a &lt;a href="https://docs.mia-platform.eu/docs/marketplace/overview_marketplace"&gt;marketplace&lt;/a&gt; with essential plugins, templates, and applications making it easier to get started. Its features benefit not only the developers but also the platform engineers and CIOs.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://humanitec.com/"&gt;Humanitec&lt;/a&gt; is an internal developer platform providing simplicity, automation, reusability, and self-service. It acts as a Platform Orchestrator that lets engineering teams remove bottlenecks by enabling them to build code-based golden paths (executable config files and templates) for developers. It can be used through the CLI or UI.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.opslevel.com/"&gt;Opslevel&lt;/a&gt; provides engineering teams self-serve access to tools and information. It helps developers ensure operational excellence across services with its &lt;a href="https://www.opslevel.com/integrations"&gt;integrations&lt;/a&gt; that can be set up in a secure and compliant way.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://shipa.io/"&gt;Shipa&lt;/a&gt; is a Kubernetes application management platform that enables efficient deployment processes. Developers can leverage its standardized application and policy definitions that are platform-agnostic. It also has a GUI-based portal to manage apps after deployment and gain visibility into pipelines for smoother operations.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.getport.io/"&gt;Port&lt;/a&gt; provides a context-rich software catalog with maturity and quality scorecards. It also supports comprehensive developer self-service actions while providing additional role-based access controls (RBAC). Their &lt;a href="https://www.getport.io/pricing"&gt;free-forever&lt;/a&gt; provides many key features and makes it a worthy contender.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.upbound.io/"&gt;Upbound&lt;/a&gt;, powered by Crossplane, offers an enterprise-grade control plane solution for multi-cloud and hybrid environments, enabling efficient management of cloud infrastructure. The &lt;code&gt;Up&lt;/code&gt; command line and &lt;a href="https://marketplace.upbound.io/"&gt;Upbound marketplace&lt;/a&gt; make it more effective and easier to get started.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.devopsbox.io/"&gt;DevOpsBox&lt;/a&gt; is an all-in-one DevOps platform that streamlines the application deployment process. It provides a comprehensive feature set in a modular manner that allows teams to focus fully on business functionality.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Evaluating Internal Developer Platforms
&lt;/h2&gt;

&lt;p&gt;Despite their myriad benefits, internal developer platforms do not make sense for every team. They can end up being overkill for certain types of engineering teams and a burden to build and maintain for companies with smaller teams.&lt;/p&gt;

&lt;h3&gt;
  
  
  When they don’t make sense
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;You have existing processes that are efficient. Don’t complicate things too early. Companies that are currently using PaaS or other managed offerings should continue doing so as long as possible.&lt;/li&gt;
&lt;li&gt;Limited resources and team size. This could also mean your team is small, and most of your team is senior and comfortable writing scripts and managing infrastructure.&lt;/li&gt;
&lt;li&gt;There is low development complexity. If you have just one app with a simple single-cloud setup, and your app is a monolith that doesn’t use a microservices architecture, there is little benefit to creating an IDP.&lt;/li&gt;
&lt;li&gt;Incompatible organizational culture. If an organization’s culture is resistant to change or does not foster collaboration and communication, implementing an IDP might not be successful and could even lead to decreased efficiency and productivity.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When they make sense
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;You plan to start using microservices. This usually also means a growing development team or increasing complexity in the projects you handle.&lt;/li&gt;
&lt;li&gt;You have a small team, not everyone is comfortable with deployments, scripting, and infrastructure, and you have not yet hired a dedicated DevOps engineer.&lt;/li&gt;
&lt;li&gt;Dependencies on other colleagues are blocking your developers.&lt;/li&gt;
&lt;li&gt;The cost of your existing setup (such as a PaaS) is too high, which is inevitable once you start scaling to meet new requirements.&lt;/li&gt;
&lt;li&gt;You have plans to go multi-cloud, adopt more modern cloud services, and scale geographically.&lt;/li&gt;
&lt;li&gt;You want to increase standardization and consistency across your teams. An IDP can help reduce errors, improve code quality, and ensure all developers work with the same set of tools and follow the same best practices.&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>productivity</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>cloud</category>
    </item>
    <item>
      <title>ArgoCD in Action: A Behind-the-Scenes Tour of Argonaut's GitOps Approach</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Tue, 11 Apr 2023 17:53:37 +0000</pubDate>
      <link>https://dev.to/argonaut/argocd-in-action-a-behind-the-scenes-tour-of-argonauts-gitops-approach-5hcg</link>
      <guid>https://dev.to/argonaut/argocd-in-action-a-behind-the-scenes-tour-of-argonauts-gitops-approach-5hcg</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is written in collaboration with &lt;a href="https://github.com/PrashantRaj18198"&gt;Prashant R&lt;/a&gt; who built the ArgoCD integration.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In our &lt;a href="https://www.argonaut.dev/blog/ci-pipelines-launch"&gt;recent announcement&lt;/a&gt; about adding pipeline features to Argonaut, we mentioned using ArgoCD for our Continuous Delivery workflows. This article sheds more light on our implementation of ArgoCD by touching on some of the major changes we’ve made. These changes may interest those curious about the backend work that went into this integration, or anyone looking to implement GitOps practices in their org using ArgoCD.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is ArgoCD?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/argoproj/argo-cd"&gt;ArgoCD&lt;/a&gt; is an open-source, declarative, and Kubernetes-native Continuous Delivery (CD) tool designed to automate and simplify application deployment and management within Kubernetes clusters. It follows the GitOps methodology, using Git repositories as the single source of truth for maintaining the desired state of applications and infrastructure. ArgoCD monitors the Git repository for changes, automatically synchronizing the declared state with the live environment, ensuring consistency, and enabling faster and more reliable application delivery.&lt;/p&gt;

&lt;p&gt;We chose ArgoCD because it gives Argonaut greater flexibility to build features on top of our existing offerings. With its extensive feature set, including built-in support for Helm, Kustomize, and other configuration management tools, we found ArgoCD the right choice for implementing our GitOps pipelines. In the coming months, we will be able to provide more functionality for our users to leverage based on these toolsets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our implementation
&lt;/h2&gt;

&lt;p&gt;There are two major new components that we’ve introduced at Argonaut that enabled this shift to ArgoCD-based workflows. First, a new internal Git repo for every workspace that uses Argonaut. Second, a manager ArgoCD controller that runs in Argonaut’s environment. Before we get into the details, let’s touch upon what our setup looked like up until now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Previous backend setup
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;(Pre-migration on 30th March 2023)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In our previous setup, we had a single workflow that combined the build and deploy steps. Though this was acceptable for most use cases, several of our customers had more complex requirements, such as deploying an image that was already built, or building once and deploying to multiple environments. There was also an issue with custom apps such as Datadog, which do not require a build step; we handled those using the Add-ons flow, differentiating them from your Git apps (Git-apps).&lt;/p&gt;

&lt;p&gt;These workflows were also triggered from Argonaut each time using Helm. This meant there was more involvement of Argonaut’s backend to complete the tasks and ensure that all your deployments are being carried on as expected. Our backend had to be &lt;code&gt;stateful&lt;/code&gt; for the most part and directly involved in keeping up with user deployments.&lt;/p&gt;

&lt;p&gt;The Argonaut Bot also had to install several workflow files in your Git repo to make it function. Small manual changes to these files could cause issues, and we even had to develop a three-way merge feature to fix them. This was a messy implementation, and many of our customers requested a better way to do this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--egwhP66X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hnpmy4o7ijjiuntjte9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--egwhP66X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hnpmy4o7ijjiuntjte9q.png" alt="Pre-migration and post-migration backend setup" width="800" height="641"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  New backend setup
&lt;/h3&gt;

&lt;p&gt;The first major change we made was separating the build and deployment into two independent steps. This involved splitting the releases into the build and deploy steps, separating the build time and run time secrets while also ensuring they work well together without affecting the performance in any way.&lt;/p&gt;

&lt;p&gt;Secondly, we now have an internal Argonaut GitHub with a unique repository for each workspace our user creates. This repo for your org will house all the config files and Helm &lt;code&gt;values.yaml&lt;/code&gt; files. The ArgoCD instances for your org will watch for changes to this Git repo and automatically apply the changes when they notice them. By doing so, we can implement GitOps-powered deployment flows for your applications.&lt;/p&gt;

&lt;p&gt;Third, we’ve reduced the number of config files we store in your repo. Each repo connected to an Argonaut org will now have just one workflow file. This file will be triggered in multiple ways based on the action you take from Argonaut’s UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D6kdKOmt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3imd18weaw52jadehyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D6kdKOmt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3imd18weaw52jadehyq.png" alt="Build step" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y-Phvizf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uy2up04psd9xggy0aq1q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y-Phvizf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uy2up04psd9xggy0aq1q.png" alt="Deploy step" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key decisions during migration
&lt;/h2&gt;

&lt;p&gt;We’ve had to make several key decisions during the migration process and ensure a consistent user experience while also maintaining cost-effectiveness and planning for future scale from our end. Here we highlight a few key decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spinning up ArgoCD instances
&lt;/h3&gt;

&lt;p&gt;All kinds of users visit and try out Argonaut. Some use it for their hobby projects, some power users use it for their work, and some are just exploring the product. We wanted to continue offering Argonaut as a free-to-use solution for all.&lt;/p&gt;

&lt;p&gt;With the new approach, initial costs would go up on Argonaut’s end as we spin up new ArgoCD instances per user, org, and environment. It was therefore necessary to pick a milestone in a user’s journey at which deploying an ArgoCD instance for their apps was viable, i.e., a point where we were confident they would actually use Argonaut for their deployments.&lt;/p&gt;

&lt;p&gt;We decided to spin up a new ArgoCD instance whenever users create their first k8s cluster in Argonaut. This would trigger a &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/#helm"&gt;Helm install&lt;/a&gt; that would set up the ArgoCD instance in your own cluster.&lt;/p&gt;
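&lt;p&gt;For reference, a plain Helm-based ArgoCD install looks roughly like this (the release and namespace names here are placeholders, and the commands are printed rather than executed; see the linked docs for the full set of options):&lt;/p&gt;

```shell
# Sketch of a Helm-based Argo CD install. Commands are echoed for
# review; release and namespace names are placeholders.
set -eu
cmd() { echo "$*"; }

cmd helm repo add argo https://argoproj.github.io/argo-helm
cmd helm repo update
cmd helm install argocd argo/argo-cd --namespace argocd --create-namespace
```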

&lt;p&gt;Since every workspace has just one ArgoCD instance and one repo in Argonaut’s internal GitHub, creating the first Kubernetes cluster as a threshold point would be sufficient for all their future workflows and apps managed using Argonaut.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Note: Though the ArgoCD instance is only installed on first cluster creation, the internal Git repo creation happens when a new org is created. The GitHub repo is essential to store all configs pertaining to the org.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Connecting ArgoCD and user clusters
&lt;/h3&gt;

&lt;p&gt;Now that we have an ArgoCD manager app running on Argonaut’s cluster and an instance running in each of our users’ workspaces, the next step is to ensure the right ArgoCD instances are watching the right repos and applying changes as they come.&lt;/p&gt;

&lt;p&gt;In our setup, the manager App is the one that runs in Argonaut’s cluster, and the rest of the apps are deployed directly in our user’s cluster during their first Kubernetes cluster creation.&lt;/p&gt;

&lt;p&gt;The Manager app’s task is to make sure all the child apps are in sync with the Git repo. For example, say a user wants to deploy the Battleships app. The user creates a new Kubernetes cluster, and Argonaut automatically creates a connection to it using ArgoCD. The Manager app on Argonaut’s cluster then spins up a child app (one per org) on the user’s cluster. Whenever the config in the Git repo changes, the child app picks it up and ensures the cluster config stays in sync with that org’s repo.&lt;/p&gt;

&lt;p&gt;The other issue we had to deal with was that the ArgoCD Manager runs on Argonaut’s cluster, but we need to deploy the apps with the specified configs on the user’s selected cluster. To do this, we took the &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/security/#external-cluster-credentials"&gt;external cluster credentials&lt;/a&gt; approach.&lt;/p&gt;
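&lt;p&gt;ArgoCD registers external clusters through Kubernetes Secrets labeled &lt;code&gt;argocd.argoproj.io/secret-type: cluster&lt;/code&gt;. A minimal sketch, with the server URL and credentials as placeholders:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: user-cluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # tells ArgoCD this is a cluster credential
type: Opaque
stringData:
  name: user-cluster
  server: https://user-cluster.example.com    # placeholder API server URL
  config: |
    {
      "bearerToken": "REDACTED",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "BASE64_ENCODED_CA_CERT"
      }
    }
```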

&lt;p&gt;The cost of the ArgoCD controller running on our cluster and its related components is now borne entirely by Argonaut, which means improved performance for our users at no additional cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring the deployment
&lt;/h3&gt;

&lt;p&gt;For every app you deploy, Argonaut’s backend creates the &lt;code&gt;values.yaml&lt;/code&gt; file, and commits it to the internal repo corresponding to your workspace. The ArgoCD instances on your clusters automatically pick up on these changes and deploy them to your cluster.&lt;/p&gt;

&lt;p&gt;At this point, Argonaut doesn’t know the state of the deployment, as it is managed by ArgoCD. To get more visibility, we use ArgoCD’s webhooks: as soon as the configuration is saved in the internal repo for your workspace, we listen for the callback from ArgoCD and continuously update the status of your deployment in the Argonaut UI. Everything happens in an &lt;code&gt;async&lt;/code&gt; way. Each ArgoCD app shares its name with a Deployment ID, and we use these IDs to map and update the deployment state.&lt;/p&gt;
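&lt;p&gt;One way to receive such callbacks is ArgoCD’s notifications engine, which can POST status changes to an external endpoint. A rough sketch of what this might look like; the service name, URL, trigger, and body are assumptions for illustration, not Argonaut’s actual config:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.webhook.status-callback: |      # hypothetical webhook service
    url: https://api.example.com/deploy-status
  template.deploy-status: |
    webhook:
      status-callback:
        method: POST
        body: |
          {"deployment_id": "{{.app.metadata.name}}", "status": "{{.app.status.sync.status}}"}
  trigger.on-sync-finished: |
    - when: app.status.operationState.phase in ['Succeeded', 'Failed']
      send: [deploy-status]
```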

&lt;h2&gt;
  
  
  Managing secrets
&lt;/h2&gt;

&lt;p&gt;Secrets are managed outside of the ArgoCD instance and Git. We use an internal secret manager for this, with plans to extend this implementation and integrate with other secret managers like Doppler and HashiCorp Vault in the future.&lt;/p&gt;

&lt;p&gt;The container registry secrets are generated and stored outside of this as well, and are upserted as needed into the Kubernetes cluster.&lt;/p&gt;
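&lt;p&gt;Upserting a registry credential of this kind typically uses the dry-run-and-apply pattern, so the command works whether or not the secret already exists. The registry, user, and namespace below are placeholders:&lt;/p&gt;

```shell
# Create (or update, if it already exists) an image pull secret
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=ci-bot \
  --docker-password="$REGISTRY_PASSWORD" \
  --namespace=my-app \
  --dry-run=client -o yaml | kubectl apply -f -
```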

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We’ve been working on this migration for a while now, and we’re excited to share it with you. This enables us to be truly GitOps-powered and also gives us the flexibility to serve scaling teams and infrastructure for our users as we grow.&lt;/p&gt;

&lt;p&gt;Specifically, this new setup enables us to add functionality like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploying to multiple clouds, environments, and clusters&lt;/li&gt;
&lt;li&gt;Cloning apps and environments&lt;/li&gt;
&lt;li&gt;Managing secrets&lt;/li&gt;
&lt;li&gt;Managing custom build and deploy pipelines&lt;/li&gt;
&lt;li&gt;Preview environments on demand&lt;/li&gt;
&lt;li&gt;And more!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ve also been able to improve the user experience by making it easier to manage your deployments and secrets.&lt;/p&gt;

&lt;p&gt;We’re excited to see what you build with Argonaut. If you have any questions or feedback, please reach out to us on &lt;a href="https://twitter.com/Argonaut_Dev"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>gitops</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Redpanda: Quickstart to selfhost Redpanda</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Tue, 28 Mar 2023 11:00:40 +0000</pubDate>
      <link>https://dev.to/argonaut/redpanda-quickstart-to-selfhost-redpanda-50ph</link>
      <guid>https://dev.to/argonaut/redpanda-quickstart-to-selfhost-redpanda-50ph</guid>
      <description>&lt;p&gt;⏰ &lt;em&gt;Estimated Time: 4 minutes&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This guide covers the steps to self host Redpanda on your Kubernetes cluster using Argonaut.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://redpanda.com/"&gt;Redpanda&lt;/a&gt; is a distributed event streaming platform providing infrastructure for real-time data. It is designed to be fast, scalable, and easy to use. It is built on top of Apache Kafka and is fully compatible with the Kafka ecosystem, while being packaged as a single binary without the need for ZooKeeper or a separate control plane.&lt;/p&gt;

&lt;p&gt;This makes for a lightweight and easily maintainable deployment. It provides a number of features that make it well-suited for modern streaming applications, including support for transactions, and a high-performance storage engine.&lt;/p&gt;

&lt;p&gt;We will deploy both Redpanda and the Redpanda console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites for installation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://ship.argonaut.dev/"&gt;Argonaut&lt;/a&gt; account with an environment created&lt;/li&gt;
&lt;li&gt;AWS or GCP account connected to Argonaut&lt;/li&gt;
&lt;li&gt;EKS or GKE cluster provisioned by Argonaut or any other imported Kubernetes cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; This installation requires Kubernetes 1.21+. Argonaut automatically maintains Kubernetes versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Library App in Argonaut
&lt;/h2&gt;

&lt;p&gt;You can add Redpanda as a custom app from Library to Argonaut. This will then be available in your cluster under &lt;strong&gt;Add-Ons&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 Each Redpanda broker runs on its own worker node, so the chart requires a minimum of 3 nodes by default. This can be relaxed in the Helm chart configuration's &lt;code&gt;podAntiAffinity&lt;/code&gt; rules by setting &lt;code&gt;podAntiAffinity.type: soft&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Click on the &lt;code&gt;Application +&lt;/code&gt; button, then the &lt;code&gt;From Library&lt;/code&gt; button.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--izS-KjJM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ju8txskd1xqka0tfcr44.png" alt="Select Library Application" width="391" height="246"&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;Custom-Apps&lt;/code&gt; under configuration.&lt;/li&gt;
&lt;li&gt;Ensure the selected environment is correct, then select the cluster you want to deploy the agent to. Make sure the environment and region of this node match those of all other nodes.&lt;/li&gt;
&lt;li&gt;Set the configuration to Custom Apps (Helm)

&lt;ol&gt;
&lt;li&gt;Set the namespace to the same name as the cluster (that's where your apps are deployed by default)&lt;/li&gt;
&lt;li&gt;Release Name as &lt;code&gt;redpanda&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Chart Name as &lt;code&gt;redpanda&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Repo Url as &lt;code&gt;https://charts.redpanda.com&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Helm version will be updated automatically.&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P0boN_F---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/beu9d3dx9ie0n7xq66r5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P0boN_F---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/beu9d3dx9ie0n7xq66r5.png" alt="Deployment information" width="880" height="352"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Make the following changes to the &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Logging&lt;/span&gt;
  &lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Log level&lt;/span&gt;
    &lt;span class="c1"&gt;# Valid values (from least to most logging) are warn, info, debug, trace&lt;/span&gt;
    &lt;span class="na"&gt;logLevel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Multiple &lt;code&gt;redpanda&lt;/code&gt; instances running in the same cluster will need manual port tweaks to avoid conflicts. This is supported by the chart.&lt;/li&gt;
&lt;li&gt;Secrets must be used for passwords in production environments.&lt;/li&gt;
&lt;li&gt;Optionally, enable &lt;a href="https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer"&gt;SASL authentication&lt;/a&gt; by editing this in the &lt;code&gt;values.yaml&lt;/code&gt; file.
&lt;/li&gt;
&lt;/ol&gt;

&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="c1"&gt;# Authentication&lt;/span&gt;
  &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sasl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superuser&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;secretpassword&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;code&gt;Install&lt;/code&gt; to deploy the app.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
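&lt;p&gt;For reference, the equivalent install outside Argonaut with plain Helm would look roughly like this; the release name, namespace, and values file mirror the settings above:&lt;/p&gt;

```shell
helm repo add redpanda https://charts.redpanda.com
helm repo update

# Namespace matches the cluster name, per the steps above
helm install redpanda redpanda/redpanda \
  --namespace my-cluster \
  --create-namespace \
  --values values.yaml
```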

&lt;h2&gt;
  
  
  Accessing Redpanda
&lt;/h2&gt;

&lt;p&gt;The Kafka interface is exposed on port 9094 by default (&lt;code&gt;listeners.kafka.external.default.port&lt;/code&gt;). The Redpanda Admin API is exposed on port 9644 within the cluster (&lt;code&gt;listeners.admin.port&lt;/code&gt;).&lt;/p&gt;
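&lt;p&gt;A quick way to verify the deployment is to run &lt;code&gt;rpk&lt;/code&gt;, Redpanda's CLI, inside one of the broker pods. The namespace and pod name below assume the default &lt;code&gt;redpanda&lt;/code&gt; release installed as described above:&lt;/p&gt;

```shell
# Check cluster health via the Admin API
kubectl exec -n my-cluster redpanda-0 -c redpanda -- rpk cluster health

# Create a test topic over the Kafka API
kubectl exec -n my-cluster redpanda-0 -c redpanda -- rpk topic create test-topic
```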

&lt;h3&gt;
  
  
  Components installed
&lt;/h3&gt;

&lt;p&gt;The Redpanda Helm chart will install the following components into your Kubernetes cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.redpanda.com/docs/deploy/deployment-option/self-hosted/kubernetes/eks-guide/#statefulset"&gt;A StatefulSet&lt;/a&gt; with three Pods.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.redpanda.com/docs/deploy/deployment-option/self-hosted/kubernetes/eks-guide/#persistentvolumeclaims"&gt;One PersistentVolumeClaim&lt;/a&gt; for each Pod, each with a capacity of 20Gi.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.redpanda.com/docs/deploy/deployment-option/self-hosted/kubernetes/eks-guide/#service"&gt;A headless ClusterIP Service and a NodePort Service&lt;/a&gt; for each Kubernetes node that runs a Redpanda broker.&lt;/li&gt;
&lt;/ul&gt;
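&lt;p&gt;You can confirm these components landed in the cluster with a single query; the namespace assumes the setup above:&lt;/p&gt;

```shell
# List the StatefulSet, PVCs, and Services created by the chart
kubectl get statefulset,pvc,svc -n my-cluster -l app.kubernetes.io/name=redpanda
```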

</description>
      <category>kafka</category>
      <category>kubernetes</category>
      <category>redpanda</category>
      <category>data</category>
    </item>
    <item>
      <title>Behind The Scenes: Integrating Infracost for Cost Visibility</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Thu, 02 Mar 2023 09:32:02 +0000</pubDate>
      <link>https://dev.to/argonaut/behind-the-scenes-integrating-infracost-for-cost-visibility-14a5</link>
      <guid>https://dev.to/argonaut/behind-the-scenes-integrating-infracost-for-cost-visibility-14a5</guid>
      <description>&lt;p&gt;&lt;em&gt;This article is written with massive help from &lt;a href="https://prajjwal.me/" rel="noopener noreferrer"&gt;Prajjwal Dimri&lt;/a&gt; who built the infracost integration.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Argonaut enables app deployment and management in Kubernetes on AWS and GCP. Cloud infra like RDS, S3, CloudSQL, GKE, and EKS can also be provisioned and managed alongside your apps in one place, across all environments. We aim to provide more visibility into your cloud setup. This article walks through Argonaut’s integration with Infracost, which gives users instant cost estimates for their infra resources.&lt;/p&gt;

&lt;p&gt;One of the main challenges in achieving visibility into cloud resources is getting a sense of cloud costs. &lt;a href="https://finops.org/" rel="noopener noreferrer"&gt;FinOps&lt;/a&gt; is a fast-growing cross-functional discipline covering the tools and best practices for managing and optimizing cloud spending. Argonaut provides its users with real-time cloud cost estimates, backed by reliable, up-to-date resource pricing from Infracost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://infracost.io" rel="noopener noreferrer"&gt;Infracost.io&lt;/a&gt; is one of the leading open-source projects for cloud cost, with over 3 million cloud resources pricing across AWS, GCP, and Azure. It can be used by invoking the &lt;a href="https://www.infracost.io/docs/cloud_pricing_api/overview/" rel="noopener noreferrer"&gt;Cloud pricing API&lt;/a&gt;, &lt;a href="https://www.infracost.io/docs/features/cli_commands/" rel="noopener noreferrer"&gt;CLI commands&lt;/a&gt;, or through one of &lt;a href="https://www.infracost.io/docs/integrations/github_app/" rel="noopener noreferrer"&gt;its integrations&lt;/a&gt;. It can also be self-hosted and used with the &lt;a href="https://www.infracost.io/pricing/" rel="noopener noreferrer"&gt;Infracost Cloud&lt;/a&gt; subscription.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://www.argonaut.dev/blog/july-2022-release-notes" rel="noopener noreferrer"&gt;July 2022&lt;/a&gt;, Argonaut launched the cost estimate feature, adopting Infracost to fetch resource pricing for infra components shipped using Argonaut. We show costs for most infra components; a few resources with usage-based pricing, such as S3, are not supported.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost visibility in Argonaut
&lt;/h2&gt;

&lt;p&gt;Users can see the cost of infrastructure resources in two places. First, pre-creation, while you create/update a new resource. Second, post-creation, for existing resources with generated terraform files.&lt;/p&gt;

&lt;p&gt;Pre-creation estimates, shown before you create a resource, can be seen below (infra create page). This is a quick estimate based on the values you have entered, and is not as accurate as the post-creation cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimjf13w3shxdqdeclcme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimjf13w3shxdqdeclcme.png" alt="Infra create page quick cost estimation" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post-creation cost can be seen below (infra list page). This version is more accurate, as it uses the Terraform config files to estimate the cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7w3vxztmq6mo7idz5ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7w3vxztmq6mo7idz5ci.png" alt="Infra List page final cost estimation" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation
&lt;/h2&gt;

&lt;p&gt;To achieve this, we built out a microservice, creatively named &lt;code&gt;costly&lt;/code&gt;, that interacts with Argonaut's backend and the Infracost API to provide us with cost estimates. We have two different workflows that use Infracost and &lt;code&gt;costly&lt;/code&gt; in different ways.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: We deliberately use a monolith architecture for our backend. We have a single backend that handles all requests from the UI. This is a conscious decision to keep the backend simple and easy to maintain.&lt;/p&gt;

&lt;p&gt;We chose to build a separate microservice to interact with Infracost because we use REST APIs internally, while Infracost exposes a GraphQL API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;costly&lt;/code&gt; microservice
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;costly&lt;/code&gt; is a microservice created by our team to facilitate easy cost visibility at various stages of an infra operation. &lt;code&gt;costly&lt;/code&gt;’s job is to aggregate the pricing (base price + storage + compute cost). It also understands the resource type and calculates the total cost by including other factors like region, number of node groups, and attributes based on the service type (e.g., purchase options for Kubernetes clusters).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;costly&lt;/code&gt; microservice processes the logic to separate fixed and variable costs based on the resource type, and the resulting cost is returned to the user in the Argonaut UI. This is a rough estimate based on the values you have entered. By hovering over the &lt;code&gt;?&lt;/code&gt; you can see the breakdown between fixed and variable costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh8m8jktetq1k5iagrc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flh8m8jktetq1k5iagrc6.png" alt="Costly microservice, cost breakdown - Fixed and Variable costs" width="764" height="204"&gt;&lt;/a&gt;&lt;/p&gt;
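&lt;p&gt;The kind of breakdown shown above can be illustrated with a small sketch. The resource shape and rates here are made up for illustration and are not &lt;code&gt;costly&lt;/code&gt;’s actual logic:&lt;/p&gt;

```python
# Illustrative only: split a resource's monthly estimate into the fixed and
# variable components the UI breakdown presents (rates are hypothetical).
def estimate_cost(base_price: float, storage_gb: int, storage_rate: float,
                  compute_hours: float, compute_rate: float) -> dict:
    fixed = base_price                        # flat charge for the resource
    variable = (storage_gb * storage_rate     # usage-dependent charges
                + compute_hours * compute_rate)
    return {"fixed": round(fixed, 2),
            "variable": round(variable, 2),
            "total": round(fixed + variable, 2)}

breakdown = estimate_cost(base_price=25.0, storage_gb=100, storage_rate=0.10,
                          compute_hours=730, compute_rate=0.05)
# breakdown: {'fixed': 25.0, 'variable': 46.5, 'total': 71.5}
```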

&lt;p&gt;Here is an example of a GraphQL query we run in the &lt;code&gt;costly&lt;/code&gt; microservice. It shows the processing of a request for an AWS DocumentDB resource. The &lt;a href="https://www.infracost.io/docs/cloud_pricing_api/overview/" rel="noopener noreferrer"&gt;product properties&lt;/a&gt; are supplied by the user, and the value returned is the price in USD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight graphql"&gt;&lt;code&gt;&lt;span class="k"&gt;query&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;!,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$instanceType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;!)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;products&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;vendorName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aws"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$region&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AmazonDocDB"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;attributeFilters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"instanceType"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nv"&gt;$instanceType&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;prices&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;USD&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
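&lt;p&gt;The same query can be issued directly against Infracost's hosted Cloud Pricing API over HTTP; the API key and the instance type value below are placeholders:&lt;/p&gt;

```shell
curl -s https://pricing.api.infracost.io/graphql \
  -X POST \
  -H "X-Api-Key: $INFRACOST_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ products(filter: {vendorName: \"aws\", region: \"us-east-1\", service: \"AmazonDocDB\", attributeFilters: [{key: \"instanceType\", value: \"db.r5.large\"}]}) { prices { USD } } }"}'
```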



&lt;h3&gt;
  
  
  Cost estimation workflows
&lt;/h3&gt;

&lt;p&gt;Argonaut uses two workflows to get you the costs of resources. They are described below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick cost estimates (pre-creation)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67kto69j58hwlcjh6llj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67kto69j58hwlcjh6llj.png" alt="Pre-creation cost estimate" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This estimate is shown when you create a new infra resource or update an existing one on Argonaut. Perform the following actions to get a quick cost estimate:&lt;br&gt;
&lt;code&gt;Infra&lt;/code&gt; &amp;gt; &lt;code&gt;Resource +&lt;/code&gt; &amp;gt; Add resource details &amp;gt; See cost estimate&lt;/p&gt;

&lt;p&gt;We get the above estimate by using Infracost’s Cloud Pricing API.&lt;/p&gt;

&lt;p&gt;The process happens in the following sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A request is sent from the UI to the Argonaut backend containing the infra resource parameters, such as:

&lt;ol&gt;
&lt;li&gt;Vendor&lt;/li&gt;
&lt;li&gt;Service&lt;/li&gt;
&lt;li&gt;Product Family&lt;/li&gt;
&lt;li&gt;Region&lt;/li&gt;
&lt;li&gt;Attributes&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;This request is forwarded to the &lt;code&gt;costly&lt;/code&gt; microservice.&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;costly&lt;/code&gt; microservice processes the logic to separate fixed and variable costs based on the resource type.&lt;/li&gt;

&lt;li&gt;The Infracost Cloud Pricing API uses this information and returns the correct cost values based on the pricing charts available for the requested resource.&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Infra provisioning along with cost estimate (post-creation)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fod8f0cd3l8n1rs15h39v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fod8f0cd3l8n1rs15h39v.png" alt="Post-creation cost estimate" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Actual costs refer to the costs shown for resources once they are created and active. These are usually more accurate than the initial quick estimates. The flow is slightly different: in this step, we use the Terraform config files to estimate your infra resource costs.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;costly&lt;/code&gt; has an additional step where the returned cost is stored securely. A time-series ClickHouse database is used here, and each entry is mapped to your account and resource ID. This will power upcoming features that provide more ways to save costs.&lt;br&gt;
This process happens in the following sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User specifies resource attributes and clicks &lt;code&gt;create resource&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The backend receives the request and provisions the necessary Terraform (TF) files and resources.&lt;/li&gt;
&lt;li&gt;The backend then initiates cost calculations using Infracost for all resources in the TF file.&lt;/li&gt;
&lt;li&gt;Costs are forwarded to &lt;code&gt;costly&lt;/code&gt; along with the &lt;code&gt;resource_id&lt;/code&gt;, which is then stored securely by &lt;code&gt;costly&lt;/code&gt;. &lt;code&gt;costly&lt;/code&gt; also calculates total costs.&lt;/li&gt;
&lt;li&gt;This cost is then displayed to the user.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: There are two additional scenarios where the post-creation workflow is triggered. These don’t require any user interaction and are automatically handled by Argonaut.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A &lt;code&gt;weekly cron job&lt;/code&gt; updates all infra costs based on the latest Terraform files&lt;/li&gt;
&lt;li&gt;A safeguard method that can initiate the cost estimation process if no value is found for the &lt;code&gt;resource_id&lt;/code&gt; in the storage&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Benefits to Argonaut users
&lt;/h2&gt;

&lt;p&gt;Argonaut provides several benefits by integrating infra cost visibility right into the resource CRUD workflows.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It saves the user time and effort compared to visiting other tools such as AWS cost calculator to obtain the relevant costs.&lt;/li&gt;
&lt;li&gt;The decision-making process is faster. By viewing the costs in-line, you can decide whether the resource fits within your budget for that service.&lt;/li&gt;
&lt;li&gt;The process is automated and based on TF modules (GitOps single source of truth principles), leaving less room for human errors than cost estimates using third-party software.&lt;/li&gt;
&lt;li&gt;Argonaut updates its infra and resource costs weekly with a custom cost update service.&lt;/li&gt;
&lt;li&gt;Cost estimates become part of your infra workflow, helping you achieve more savings and improve your cloud resource utilization.&lt;/li&gt;
&lt;li&gt;Supported resources - Argonaut currently provides cost information for these AWS and GCP resources, respectively

&lt;ol&gt;
&lt;li&gt;AWS

&lt;ol&gt;
&lt;li&gt;Aurora&lt;/li&gt;
&lt;li&gt;DocumentDB&lt;/li&gt;
&lt;li&gt;EKS&lt;/li&gt;
&lt;li&gt;OpenSearch (Elasticsearch)&lt;/li&gt;
&lt;li&gt;Elasticache&lt;/li&gt;
&lt;li&gt;MSK&lt;/li&gt;
&lt;li&gt;RDS&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;GCP

&lt;ol&gt;
&lt;li&gt;CloudSQL&lt;/li&gt;
&lt;li&gt;CloudComposer v2&lt;/li&gt;
&lt;li&gt;GKE&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;/li&gt;

&lt;/ol&gt;

&lt;h2&gt;
  
  
  Challenges faced during implementation
&lt;/h2&gt;

&lt;p&gt;Here are a few challenges we faced and things we learned integrating &lt;code&gt;costly&lt;/code&gt; and Infracost. If you’re looking to add Infracost to your workflows, keep these in mind.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Figuring out queries for cloud pricing APIs can be tricky.&lt;/li&gt;
&lt;li&gt;The documentation for some of the &lt;a href="https://www.infracost.io/blog/cloud-pricing-api/#concepts" rel="noopener noreferrer"&gt;attributes and values&lt;/a&gt; is a work in progress.&lt;/li&gt;
&lt;li&gt;For Google’s CloudSQL queries, the API returned text-based responses containing various attributes. Regex matching was challenging and required additional custom functions to parse the responses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite these small hurdles, Infracost was easy to use, and its active community helped us resolve the &lt;a href="https://github.com/infracost/infracost/issues/1728" rel="noopener noreferrer"&gt;issue&lt;/a&gt; we raised on GitHub in a timely manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Having visibility into your cloud costs is essential as you set up your cloud resources. Argonaut’s current cost visibility setup with Infracost is just the first step toward helping you save on cloud costs. Many more features, such as historical cost insights and cost-saving tips, are planned for the next few months.&lt;/p&gt;

&lt;p&gt;Do note that there are other &lt;a href="https://www.argonaut.dev/blog/hidden-cloud-costs" rel="noopener noreferrer"&gt;hidden costs&lt;/a&gt; associated with cloud operations. More Kubernetes cost optimization strategies can be &lt;a href="https://www.argonaut.dev/blog/k8s-cost-optimization-strategies" rel="noopener noreferrer"&gt;found here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Huge shoutout to the &lt;a href="https://www.infracost.io/" rel="noopener noreferrer"&gt;Infracost&lt;/a&gt; team for building an amazing &lt;a href="https://github.com/infracost/infracost" rel="noopener noreferrer"&gt;open-source tool&lt;/a&gt; to help us keep track of cloud costs across thousands of different services, products, regions, and configurations.&lt;/p&gt;

</description>
      <category>introduction</category>
      <category>showdev</category>
      <category>portfolio</category>
      <category>github</category>
    </item>
    <item>
      <title>GitOps Tools: Popular CD and IaC Tools That Enable GitOps Processes</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Thu, 02 Mar 2023 09:23:54 +0000</pubDate>
      <link>https://dev.to/argonaut/gitops-tools-popular-cd-and-iac-tools-that-enable-gitops-processes-293h</link>
      <guid>https://dev.to/argonaut/gitops-tools-popular-cd-and-iac-tools-that-enable-gitops-processes-293h</guid>
      <description>&lt;p&gt;GitOps tools have transformed how teams develop and deploy applications, enabling teams to ship features faster and with greater confidence. Continuing our &lt;a href="https://www.argonaut.dev/blog/tags/gitops" rel="noopener noreferrer"&gt;GitOps series&lt;/a&gt;, this piece focuses on the popular tools that help you establish an efficient and reliable GitOps deployment pipeline.&lt;/p&gt;

&lt;p&gt;GitOps is an approach where the user can declaratively specify the desired state for both apps and infra in a Git repo. This approach to software development and operations enables teams to collaborate and manage their applications and infrastructure using Git.&lt;/p&gt;

&lt;p&gt;By utilizing source control and automation, GitOps automates the entire deployment experience, from code to cloud, providing teams with the ability to manage their applications, infrastructure, and configuration from one centralized location. This has resulted in an improved deployment experience, with more reliable, secure, and faster deployments.&lt;/p&gt;

&lt;p&gt;Check out our &lt;a href="https://dev.to/blog/gitops-primer"&gt;GitOps Primer article&lt;/a&gt; for more information on the basics. This article discusses two essential categories of tools that make GitOps possible - CD and IaC tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  CD tools for GitOps
&lt;/h2&gt;

&lt;p&gt;These tools enable the main CD pipeline that makes GitOps possible. We are talking about tools that ensure automated deployments in a secure manner and support GitOps capabilities. These tools also have Kubernetes-native support out of the box.&lt;/p&gt;

&lt;p&gt;We will have a future article that provides an in-depth comparison of ArgoCD and FluxCD.&lt;/p&gt;

&lt;h3&gt;
  
  
  ArgoCD
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt; is a popular open-source solution that’s part of the &lt;a href="https://argoproj.github.io/" rel="noopener noreferrer"&gt;Argo project&lt;/a&gt;. It is a declarative continuous delivery tool for Kubernetes, with a fully-loaded UI and a CLI tool. It stores all configuration logic in Git to allow developers to utilize the code development, review, and approval workflow already connected to Git-based repositories.&lt;/p&gt;

&lt;p&gt;Some of its popular features are automating application deployment to target environments, supporting multiple configuration management and templating tools, and enforcing strong authorization with RBAC and multi-tenancy policies. It also offers real-time visibility into application activity, webhook integration, rollouts of complex deployments, access tokens, audit trails, and parameter overrides.&lt;/p&gt;
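
&lt;p&gt;As an illustration, a minimal ArgoCD &lt;code&gt;Application&lt;/code&gt; resource points the controller at a Git repo and a target cluster (a sketch; the repo URL and paths are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git # placeholder repository
    targetRevision: HEAD
    path: deploy # directory in the repo containing the manifests
  destination:
    server: https://kubernetes.default.svc # the in-cluster API server
    namespace: my-app
  syncPolicy:
    automated: # keep the cluster in sync with Git
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;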

&lt;h3&gt;
  
  
  Flux
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://fluxcd.io/" rel="noopener noreferrer"&gt;Flux&lt;/a&gt; is a powerful GitOps tool that helps you manage deployments, resources, and integrations with various Git providers and provides multi-tenancy support. It uses a cluster operator to start deployments in Kubernetes, so there's no need for another CD tool.&lt;/p&gt;

&lt;p&gt;It does not require any CI access to your Kubernetes clusters, and it provides atomic and transactional changes with an audit log stored in Git for your convenience and security. Additionally, Flux provides an easy-to-use UI that allows you to track and manage the changes you make to your clusters. Its robustness and scalability make it a great choice for businesses of any size.&lt;/p&gt;

&lt;p&gt;Several tools, such as D2iQ Kommander, the Giant Swarm Kubernetes Platform, and AWS EKS Anywhere, are built on top of Flux.&lt;/p&gt;
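
&lt;p&gt;To give a flavor of the workflow, Flux's controllers watch a Git source and apply a path from it (a sketch; the repo URL is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 1m # how often to poll the repository
  url: https://github.com/example/my-app.git # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy # directory in the repo to apply
  prune: true # remove resources that are deleted from Git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;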

&lt;h3&gt;
  
  
  GitLab CI/CD
&lt;/h3&gt;

&lt;p&gt;GitLab CI/CD is a powerful, enterprise-grade CI/CD platform that's designed to help teams create and manage their GitOps pipelines. It provides a number of unique features that make it well-suited for GitOps, such as multi-project pipelines, custom workflows, and the ability to integrate with multiple CI/CD tools. GitLab has also &lt;a href="https://about.gitlab.com/blog/2023/02/08/why-did-we-choose-to-integrate-fluxcd-with-gitlab/" rel="noopener noreferrer"&gt;announced its integration&lt;/a&gt; of Flux CD.&lt;/p&gt;

&lt;p&gt;Additionally, it offers a built-in Kubernetes integration, so teams can leverage the power of Kubernetes to deploy applications and can easily track any changes made to their applications and infrastructure in the CI/CD pipeline. GitLab CI/CD also offers built-in security features, such as authentication and authorization, secrets management, and secure credential storage, that help teams ensure their applications and infrastructure are secure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Werf
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://werf.io/" rel="noopener noreferrer"&gt;Werf&lt;/a&gt; is an open-source, GitOps-based CI/CD solution for Kubernetes developed by &lt;a href="https://flant.com/" rel="noopener noreferrer"&gt;Flant&lt;/a&gt;. It enables teams to quickly and easily deploy their applications and infrastructure with GitOps and Kubernetes. Werf’s features, such as integrated container image building, automated application deployment, and support for multiple cloud providers, make it suitable for GitOps workflows.&lt;/p&gt;

&lt;p&gt;Werf is designed to be easy to use and cost-effective, allowing teams to get up and running quickly and without any additional cost. It also offers a number of integrations with popular tools, such as Helm, Kubernetes, and Docker, as well as the ability to connect to multiple Git providers. Additionally, Werf offers support for advanced GitOps features, such as automated rollbacks, secure deployments, and multi-cluster synchronization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Weave GitOps Core
&lt;/h3&gt;

&lt;p&gt;Weave GitOps Core is an extension to Flux. It provides insights into your application deployments and makes CD with GitOps easier to scale and adopt across your teams. It provides a dashboard with applications, sources, and a Flux Runtime view.&lt;/p&gt;

&lt;p&gt;Weave GitOps Core is an open-source CD tool for Kubernetes and cloud-native applications that uses Git-based CD, is Kubernetes-native, has declarative automation, and includes integrations with various tools. Weaveworks also provides Weave GitOps Enterprise, a commercial solution based on the open-source Weave GitOps Core, and is also available via the AWS Marketplace.&lt;/p&gt;

&lt;h2&gt;
  
  
  IaC tools for GitOps
&lt;/h2&gt;

&lt;p&gt;Infrastructure as Code (IaC) tools are essential for creating and managing the infrastructure that powers applications and services. The two main advantages of using IaC are that it helps automate the process of configuring and deploying cloud infrastructure and is stored as code, making it easier to version control, track changes, and collaborate on deployments.&lt;/p&gt;

&lt;p&gt;In the context of GitOps, IaC tools help teams automate the process of configuring and deploying their applications and infrastructure, ensuring that the same version of the application is deployed in production as in development. This helps teams test the configuration of their applications and infrastructure in a reliable and repeatable manner.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An important thing to note is that IaC tools are not GitOps tools.&lt;/strong&gt;&lt;br&gt;
They are used to create and manage the infrastructure that powers applications and services. However, they can be used in conjunction with GitOps tools to automate the process of configuring and deploying applications and infrastructure, ensuring that the same version of the application is deployed in production as in development. Notably, IaC tools do not account for drift by default. However, there are tools that can help you detect and fix drift with continuous reconciliation; Crossplane has such features built in.&lt;/p&gt;

&lt;p&gt;Here are some of the most popular open-source tools. Check out our &lt;a href="https://www.argonaut.dev/blog/infrastructure-as-code-tools" rel="noopener noreferrer"&gt;IaC tools blog&lt;/a&gt; for more tools. And here’s an &lt;a href="https://www.argonaut.dev/blog/comprehensive-iac-comparison" rel="noopener noreferrer"&gt;in-depth comparison&lt;/a&gt; of CloudFormation, Terraform, and Pulumi.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;Terraform by HashiCorp is an Infrastructure as Code (IaC) tool that is cloud platform-agnostic and allows users to define cloud and on-prem resources in a human-readable format. It also has plugins that interact directly with cloud providers and SaaS providers, and a registry of publicly available modules. Terraform also enables initializing infrastructure from scratch, defining network resources, and integrating with VCS like GitHub.&lt;/p&gt;

&lt;p&gt;It offers features such as an intuitive UI, an extensive library of pre-defined modules, and the ability to define multiple environments for development, staging, and production. Additionally, Terraform supports a wide variety of cloud providers, making it one of the most popular IaC tools.&lt;/p&gt;
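
&lt;p&gt;In a GitOps flow, Terraform changes typically land through the familiar init/plan/apply cycle driven from the repo (a sketch of the CLI workflow):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init               # download providers and modules
terraform plan -out=tfplan   # preview the changes described in Git
terraform apply tfplan       # apply the reviewed plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;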

&lt;h3&gt;
  
  
  Pulumi
&lt;/h3&gt;

&lt;p&gt;Pulumi’s Cloud Engineering Platform helps you build, deploy, and manage your infrastructure. Building your IaC with Pulumi lets you model your cloud infrastructure in the languages you love, such as TypeScript, JavaScript, Go, .NET, and YAML. It also provides access to a full breadth of services from AWS, GCP, Azure, and over 60 other providers. Your cloud infra code can be reused in the form of &lt;strong&gt;&lt;a href="https://www.pulumi.com/product/packages/" rel="noopener noreferrer"&gt;Pulumi packages&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Founded in 2017 by Microsoft and Google veterans, Pulumi is a free, open-source infrastructure as code tool. The Pulumi Service offering makes managing infrastructure secure, reliable, and hassle-free. Pulumi service is available as both a self-host or SaaS option. You can easily deliver IaC through your existing CI/CD pipelines. There is also an option to enforce guardrails for security and compliance using policies in standard languages.&lt;/p&gt;
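
&lt;p&gt;Day to day, Pulumi follows a similar preview/apply cycle from the CLI (a sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pulumi preview   # show the planned infrastructure changes
pulumi up        # create or update resources to match your program
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;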

&lt;p&gt;You can view the roadmap and contribute to the Pulumi open-source project &lt;strong&gt;&lt;a href="https://github.com/orgs/pulumi/projects/44/views/1" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;. One of their main reasons for success has been their ability to support and help teams migrate to modern containerized, serverless, and Kubernetes workloads easily. Find a detailed comparison between Pulumi and Terraform &lt;strong&gt;&lt;a href="https://www.pulumi.com/docs/intro/vs/terraform/" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Crossplane
&lt;/h3&gt;

&lt;p&gt;Crossplane is a framework for building cloud-native control planes without the need to write code. It is built specifically for Kubernetes with support for multi-cloud, serverless, and containerized workloads.&lt;/p&gt;

&lt;p&gt;Crossplane provides building blocks that enable you to provision, compose, and consume infrastructure with the Kubernetes API. These individual concepts work together to allow for a powerful separation so that each member of a team interacts with Crossplane at an appropriate level of abstraction.&lt;/p&gt;

&lt;p&gt;For enterprises looking for more stability, better support, and reduced risk, there is the &lt;strong&gt;&lt;a href="https://www.upbound.io/products/universal-crossplane" rel="noopener noreferrer"&gt;Universal Crossplane&lt;/a&gt;&lt;/strong&gt; (UXP) product by Upbound.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: Cloud Service Provider (CSP) tools for Infrastructure as Code (IaC) offer an intuitive experience and good integration with existing cloud services. However, they may be limiting if you need to run solutions across multiple clouds.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  AWS Cloud Development Kit
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/cdk/" rel="noopener noreferrer"&gt;AWS Cloud Development Kit&lt;/a&gt; (CDK) is an open-source software development framework for defining cloud Infrastructure as Code (IaC) using modern programming languages such as Java, Python, .NET, and TypeScript. This allows developers to create and deploy their infrastructure and runtime code together.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;constructs&lt;/strong&gt;, which are cloud components used to generate AWS infrastructure. AWS CloudFormation powers the provisioning, and you get all the benefits of CloudFormation, including repeatable deployment, easy rollback, and drift detection.&lt;/p&gt;
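
&lt;p&gt;A typical CDK cycle from the CLI looks like this (a sketch):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk synth    # synthesize the CloudFormation template from your constructs
cdk diff     # compare the deployed stack with your local changes
cdk deploy   # provision the stack via CloudFormation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;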

&lt;p&gt;Some of its drawbacks and limitations are that AWS CDK does not currently have support for non-AWS cloud providers, so teams using it will be limited to AWS cloud services. AWS CDK is a relatively new tool, and it may still have some bugs and lack certain features that teams may need. You might need to use other IaC tools, such as Terraform, to supplement.&lt;/p&gt;

&lt;h3&gt;
  
  
  GCP Cloud Deployment Manager
&lt;/h3&gt;

&lt;p&gt;Google's &lt;a href="https://cloud.google.com/deployment-manager/docs" rel="noopener noreferrer"&gt;Cloud Deployment Manager&lt;/a&gt; is an infrastructure deployment service that automates the creation and management of Google Cloud resources. It provides users with the power to write flexible templates and configuration files, allowing them to create deployments that include various Google Cloud services, such as Cloud Storage, Compute Engine, and Cloud SQL.&lt;/p&gt;

&lt;p&gt;Through the use of these files, users can deploy the resources they need and configure them to work together seamlessly, providing them with an easy and efficient way to manage their cloud infrastructure. Additionally, Cloud Deployment Manager allows users to make changes and updates to their deployments quickly and easily, ensuring that their cloud resources remain up-to-date and in line with their desired configurations.&lt;/p&gt;
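
&lt;p&gt;A deployment is created from a configuration file with the gcloud CLI (a sketch; the deployment and file names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud deployment-manager deployments create my-deployment \
    --config config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;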

&lt;h3&gt;
  
  
  Azure Resource Manager
&lt;/h3&gt;

&lt;p&gt;Azure Resource Manager is the deployment and management service for Azure. It acts as a management layer that authenticates and authorizes requests from your Azure APIs, CLI, or SDKs before forwarding them to the respective Azure service. Operations such as creating, updating, and deleting Azure resources are supported, and features like access control, locks, logs, and tags help you organize your resources after deployment.&lt;/p&gt;

&lt;p&gt;To implement Infrastructure as Code (IaC) for your Azure cloud resources, you use Azure Resource Manager (ARM) templates. These IaC templates are written in &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep" rel="noopener noreferrer"&gt;Bicep&lt;/a&gt;&lt;/strong&gt;, a domain-specific language with declarative syntax. The Bicep code is converted into JSON files during deployment by the Bicep CLI. ARM templates can also be written directly in JSON.&lt;/p&gt;

&lt;p&gt;Azure resources that share a common lifecycle are grouped into &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-to-resource-group" rel="noopener noreferrer"&gt;resource groups&lt;/a&gt;&lt;/strong&gt;. You can then deploy to the target resource group. All the available resources for Azure services can be found &lt;strong&gt;&lt;a href="https://learn.microsoft.com/en-us/azure/azure-resource-manager/management/azure-services-resource-providers" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
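
&lt;p&gt;Deploying a Bicep template to a target resource group can be done with the Azure CLI (a sketch; the group and file names are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;az deployment group create \
    --resource-group my-rg \
    --template-file main.bicep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;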

&lt;h2&gt;
  
  
  Considerations before using GitOps
&lt;/h2&gt;

&lt;p&gt;To make the most out of GitOps, you will need a process and tooling shift. Here are some practical considerations before adopting GitOps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure that everyone in the organization is properly trained on the tools and processes used in the GitOps workflow (Git, CD tools, IaC, and Kubernetes).&lt;/li&gt;
&lt;li&gt;Establish a clear policy for committing and merging changes into the repository, especially for how changes are tested and reviewed before they are merged, and make sure conflicts or issues are identified and resolved quickly.&lt;/li&gt;
&lt;li&gt;Maintain the security of the Git repository carefully with the right access controls, proper authentication and authorization, and how secrets or sensitive data are securely stored and managed.&lt;/li&gt;
&lt;li&gt;Proper monitoring and backups of the Git repository are essential. This helps to ensure that any changes can easily be tracked and reverted if necessary.&lt;/li&gt;
&lt;li&gt;Most often, choosing an open-source tool with an active community and support for multiple cloud vendors is the safer option, as it saves debugging time, is extensible, and can scale with your organization.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By taking the proper precautions and ensuring that everyone is properly trained and equipped with the right tools, organizations can reap the benefits of using GitOps and have a smooth and successful transition into the GitOps workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;GitOps is a powerful approach to software development and operations that helps teams collaborate and manage their applications and infrastructure using Git. It automates the deployment experience and helps teams achieve improved deployment experiences with more reliable, secure, and faster deployments.&lt;/p&gt;

&lt;p&gt;We hope this article has provided you with an overview of some of the most popular tools and helped you understand the benefits of GitOps and the tools available to help you adopt it.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://argonaut.dev" rel="noopener noreferrer"&gt;Argonaut&lt;/a&gt; is a deployment automation tool that comes with all the benefits of IaC and CI/CD tools in-built. We follow GitOps best practices, provide a secret-management solution, and keep your configurations stored in your repo. Moreover, we have monitoring built-in for cluster and app-level metrics, along with providing access to multiple runtimes, environments, applications, and infra from a centralized dashboard.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>learning</category>
      <category>tutorial</category>
      <category>posts</category>
    </item>
    <item>
      <title>Setup External Secrets with Hashicorp Vault on AWS EKS</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Wed, 15 Feb 2023 11:23:47 +0000</pubDate>
      <link>https://dev.to/argonaut/setup-external-secrets-with-hashicorp-vault-on-aws-eks-o1d</link>
      <guid>https://dev.to/argonaut/setup-external-secrets-with-hashicorp-vault-on-aws-eks-o1d</guid>
      <description>&lt;p&gt;&lt;a href="https://external-secrets.io/v0.7.2/introduction/getting-started/"&gt;&lt;code&gt;external-secrets&lt;/code&gt;&lt;/a&gt; is one of the most efficient and secure ways manage Kubernetes Secrets. &lt;code&gt;external-secrets&lt;/code&gt; integrates with many external secret stores like Hashicorp Vault, AWS Secret Manager, etc. and can be used to manage secret files and variables.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key components
&lt;/h3&gt;

&lt;p&gt;The main components of external secrets are as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;External Secrets Operator (ESO)&lt;/strong&gt; is a collection of custom API resources - &lt;strong&gt;&lt;code&gt;ExternalSecret&lt;/code&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;code&gt;SecretStore&lt;/code&gt;&lt;/strong&gt;, and &lt;strong&gt;&lt;code&gt;ClusterSecretStore&lt;/code&gt;&lt;/strong&gt; that provide a user-friendly abstraction for the external API that stores and manages the lifecycle of the secrets for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ExternalSecret -&lt;/strong&gt; It is a declaration of what data has to be fetched from your external secret manager. It references a &lt;strong&gt;&lt;code&gt;SecretStore&lt;/code&gt;&lt;/strong&gt;, which knows how to access the data. You can also set the refresh interval, specify a blueprint for the resulting &lt;strong&gt;&lt;code&gt;Kind=Secret&lt;/code&gt;&lt;/strong&gt;, use inline templates to construct the desired config file containing your secret, set the target secret that will be created, and define creation and deletion policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SecretStore -&lt;/strong&gt; SecretStores are namespaced by design and can only be referenced by resources in the same namespace. This is the file where you select your ESO controller and cloud provider, along with the role and access IDs, and retry settings in case of connection failure.&lt;/p&gt;

&lt;p&gt;The secrets fetched from an external vault are made available through SecretStores, which are limited to their particular namespace. For use across multiple namespaces, use the &lt;strong&gt;&lt;code&gt;ClusterSecretStore&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This guide will go through the process of setting up the External Secret Operator for your AWS EKS Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;To find out about other approaches to Kubernetes secrets management, check out &lt;a href="https://www.argonaut.dev/blog/secret-management-in-kubernetes"&gt;this blog&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;An existing Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Kubernetes v1.16.0 or later&lt;/li&gt;
&lt;li&gt;Connected Apps or databases that require secrets to access&lt;/li&gt;
&lt;li&gt;Hashicorp Vault as your external secret store&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl&lt;/code&gt; and &lt;code&gt;helm&lt;/code&gt; CLI installed&lt;/li&gt;
&lt;li&gt;Optional: an Argonaut account and the &lt;a href="https://www.argonaut.dev/docs/setup/art-cli"&gt;&lt;code&gt;art cli&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Section I of the installation can be done using either Helm (1.a.) or Argonaut’s UI (1.b.). Choose one of the sections below.&lt;/p&gt;

&lt;h3&gt;
  
  
  1.a. Install using Helm
&lt;/h3&gt;

&lt;p&gt;To set this up, you will need your &lt;strong&gt;kubeconfig&lt;/strong&gt; file. You can use a file created from your existing cluster or obtain this from Argonaut’s CLI.&lt;/p&gt;

&lt;p&gt;1: Run &lt;code&gt;art configure generate-aws-credentials&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2: Run &lt;code&gt;aws eks update-kubeconfig --name &amp;lt;clustername&amp;gt; --region &amp;lt;region&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once this is ready, run the following commands from your Terminal.&lt;/p&gt;

&lt;p&gt;For Mac and Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/home/argo/.kube/clustername-kubeconfig.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Windows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$env&lt;/span&gt;:KUBECONFIG &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"C:&lt;/span&gt;&lt;span class="se"&gt;\U&lt;/span&gt;&lt;span class="s2"&gt;sers&lt;/span&gt;&lt;span class="se"&gt;\a&lt;/span&gt;&lt;span class="s2"&gt;rgo&lt;/span&gt;&lt;span class="se"&gt;\.&lt;/span&gt;&lt;span class="s2"&gt;kube&lt;/span&gt;&lt;span class="se"&gt;\c&lt;/span&gt;&lt;span class="s2"&gt;lustername-kubeconfig.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Confirm your current context by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl config current-context
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see an output with the name of your cluster, for example &lt;code&gt;argonaut-cluster&lt;/code&gt;. Once the context is set up, you can add the &lt;code&gt;external-secrets&lt;/code&gt; repo to Helm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add external-secrets https://charts.external-secrets.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can run a &lt;code&gt;helm repo update&lt;/code&gt; to make sure you have the latest chart index&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;helm repo add&lt;/code&gt; command should have printed this output: &lt;code&gt;"external-secrets" has been added to your repositories&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can now install the &lt;code&gt;external-secrets&lt;/code&gt; chart&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    external-secrets/external-secrets &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-n&lt;/span&gt; tools &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="nv"&gt;installCRDs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see an install successful message with the name and namespace specified.&lt;/p&gt;
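
&lt;p&gt;You can verify that the operator pods are up in the &lt;code&gt;tools&lt;/code&gt; namespace used above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods --namespace tools
# the external-secrets pods should be in a Running state
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;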

&lt;h3&gt;
  
  
  1.b. Install with Argonaut
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Open Argonaut, and navigate to your desired AWS EKS cluster.&lt;/li&gt;
&lt;li&gt;Click on &lt;code&gt;Application +&lt;/code&gt;, then choose the &lt;code&gt;From Library&lt;/code&gt; option.&lt;/li&gt;
&lt;li&gt;Choose &lt;code&gt;Custom-Apps&lt;/code&gt; under configuration.&lt;/li&gt;
&lt;li&gt;Ensure the selected environment is correct, then select the cluster you want to deploy to.&lt;/li&gt;
&lt;li&gt;Set the namespace to deploy external-secrets into as &lt;code&gt;tools&lt;/code&gt; (or wherever you plan to use the secrets).&lt;/li&gt;
&lt;li&gt;Set the release name of the application as &lt;code&gt;external-secrets&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set the chart name as &lt;code&gt;external-secrets&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Set the chart repository as &lt;code&gt;https://charts.external-secrets.io&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Leave the chart version blank. It will be automatically populated to the latest version.&lt;/li&gt;
&lt;li&gt;Load the values.yaml file and make any changes (if required).&lt;/li&gt;
&lt;li&gt;Then click &lt;code&gt;Install&lt;/code&gt;. In just a minute, the external-secrets operator will be added to your cluster, and you will be able to see its outputs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now your external-secrets operator is installed in the cluster as an add-on application. You can view the logs and status, and update the configs, by clicking on external-secrets under Add-ons.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Connecting k8s CLI
&lt;/h3&gt;

&lt;p&gt;External Secrets Operator (ESO) is now installed in your cluster. You can start to use it by connecting to your cluster from your Terminal and running the following kubectl commands.&lt;/p&gt;

&lt;p&gt;To access your Kubernetes cluster, you will have to generate AWS credentials through the art CLI and connect it to your EKS cluster. This is a quick two-step process.&lt;/p&gt;

&lt;p&gt;1: Run &lt;code&gt;art configure generate-aws-credentials&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;2: Run &lt;code&gt;aws eks update-kubeconfig --name &amp;lt;clustername&amp;gt; --region &amp;lt;region&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This connects you to your cluster in the specified region and gives you access to it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note: The generated access tokens are valid for 12 hours. They inherit the same privileges granted to the Argonaut account for accessing the AWS account and infra. To set up more apps, such as Mirantis Lens and k9s, follow this &lt;a href="https://www.argonaut.dev/docs/Configs/access-k8s-cluster"&gt;docs page&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3. secret-store.yaml
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;SecretStore&lt;/code&gt; (or &lt;code&gt;ClusterSecretStore&lt;/code&gt;) defines how the &lt;code&gt;external-secrets&lt;/code&gt; operator can find and authenticate to your external secret store. This includes two main things: a reference to an existing Kubernetes secret that provides authentication to the external secret store, and the address and path of the store holding the secrets you wish to access. Make sure you have &lt;code&gt;kubectl&lt;/code&gt; installed before proceeding further.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretStore&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vault-backend"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# provider field contains the configuration to access the provider&lt;/span&gt;
    &lt;span class="na"&gt;vault&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://your-domain:8200"&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kv"&lt;/span&gt; &lt;span class="c1"&gt;# Path is the mount path of the Vault KV backend endpoint&lt;/span&gt;
      &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v1"&lt;/span&gt;
      &lt;span class="na"&gt;auth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;tokenSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# static token: https://www.vaultproject.io/docs/auth/token&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vault-token"&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: A &lt;code&gt;SecretStore&lt;/code&gt; file is also similar but is namespaced and maps to exactly one instance of an external API.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
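
&lt;p&gt;The &lt;code&gt;tokenSecretRef&lt;/code&gt; above assumes a Kubernetes secret named &lt;code&gt;vault-token&lt;/code&gt; already exists. If you don't have one yet, you can create it from a Vault token (a sketch; replace the placeholder with your token):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# the token secret must live in the same namespace as the SecretStore
kubectl create secret generic vault-token \
    --from-literal=token=&amp;lt;your-vault-token&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;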

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;--filename&lt;/span&gt; secret-store.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
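
&lt;p&gt;After applying, you can confirm the store is ready (assuming the &lt;code&gt;vault-backend&lt;/code&gt; name used above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secretstore vault-backend
# the store should report a Valid status once authentication to Vault succeeds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;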



&lt;h3&gt;
  
  
  4. external-secret.yaml
&lt;/h3&gt;

&lt;p&gt;This file tells the &lt;code&gt;external-secret&lt;/code&gt; operator what specific data is to be fetched from your external secret store. It has three sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Secret store reference - This references the &lt;code&gt;SecretStore&lt;/code&gt; resource we created earlier and tells external-secrets how to access the data.&lt;/li&gt;
&lt;li&gt; Target - This defines the name and type of the Kubernetes secret that will be created, for example, PostgreSQL credentials.&lt;/li&gt;
&lt;li&gt; Data - This defines the key of the secret as it is stored in your external secret store.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external-secrets.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ExternalSecret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-first-secret"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secretStoreRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vault-backend&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SecretStore&lt;/span&gt;  &lt;span class="c1"&gt;# or ClusterSecretStore&lt;/span&gt;
&lt;span class="err"&gt;  &lt;/span&gt;&lt;span class="na"&gt;refreshInterval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;15s"&lt;/span&gt; &lt;span class="c1"&gt;# How often this secret is synchronized&lt;/span&gt;
  &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Our target Kubernetes Secret&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-first-aws-secret&lt;/span&gt; &lt;span class="c1"&gt;# If not present, then the secretKey field under data will be used&lt;/span&gt;
    &lt;span class="na"&gt;creationPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Owner"&lt;/span&gt; &lt;span class="c1"&gt;# This will create the secret if it doesn't exist. Options are 'Owner', 'Merge', or 'None'&lt;/span&gt;
        &lt;span class="na"&gt;deletionPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Retain"&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-first-aws-secret-key&lt;/span&gt;
      &lt;span class="na"&gt;remoteRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;key:message&lt;/span&gt; &lt;span class="c1"&gt;# This is the remote key in the secret provider (might change in meaning based on your provider)&lt;/span&gt;
        &lt;span class="s"&gt;property:value&lt;/span&gt; &lt;span class="c1"&gt;# The property inside of the secret inside your secret provider&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;--namespace&lt;/span&gt; argonaut &lt;span class="nt"&gt;--filename&lt;/span&gt; external-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is done, the secrets from your external secret store are securely synced into your cluster and are available as Kubernetes Secrets. You can verify this by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; argonaut get ExternalSecret my-first-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME              STORE           REFRESH INTERVAL   STATUS
my-first-secret   vault-backend   15s                SecretSynced
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
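&lt;p&gt;To confirm that the target Kubernetes Secret was actually created, you can read it back and decode it. A minimal sketch (the secret and key names come from the manifest above; note that Kubernetes base64-encodes Secret values):&lt;/p&gt;

```shell
# Kubernetes stores Secret values base64-encoded, so decode them when reading.
# Locally, the round trip looks like this:
encoded=$(printf 'super-secret' | base64)
printf '%s' "$encoded" | base64 --decode   # prints: super-secret

# Against the cluster, you would decode the jsonpath output instead:
# kubectl --namespace argonaut get secret my-first-aws-secret \
#   --output jsonpath='{.data.my-first-aws-secret-key}' | base64 --decode
```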



&lt;blockquote&gt;
&lt;p&gt;Note: The secrets are now in Kubernetes and available to anyone with access to the cluster.&lt;/p&gt;

&lt;p&gt;Whenever you update a secret directly on your external secret store, it takes a few moments for it to reflect in your Kubernetes cluster. This refresh interval can be set as a part of your configuration.&lt;/p&gt;
&lt;/blockquote&gt;
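&lt;p&gt;Once synced, workloads consume the secret like any native Kubernetes Secret, by reference rather than by hard-coding values. A sketch (the pod name, image, and env var name are illustrative; &lt;code&gt;my-first-aws-secret&lt;/code&gt; is the target secret created above):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # illustrative name
  namespace: argonaut
spec:
  containers:
    - name: app
      image: nginx:1.25
      env:
        - name: MESSAGE     # exposed to the container as an env var
          valueFrom:
            secretKeyRef:
              name: my-first-aws-secret      # the synced Kubernetes Secret
              key: my-first-aws-secret-key   # the secretKey from the ExternalSecret
```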

&lt;h3&gt;
  
  
  5. Conclusion
&lt;/h3&gt;

&lt;p&gt;You have now successfully set up &lt;a href="http://external-secrets.io"&gt;external-secrets&lt;/a&gt;. This powerful open-source tool lets you manage secrets more securely. It also supports a variety of other &lt;a href="https://external-secrets.io/v0.7.2/provider/aws-secrets-manager/"&gt;secret store providers&lt;/a&gt;, such as AWS Secrets Manager and GCP Secret Manager.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;external-secrets&lt;/code&gt; approach lets you manage secrets securely and bring them into your cluster as needed. We discuss other approaches in &lt;a href="https://www.argonaut.dev/blog/secret-management-in-kubernetes"&gt;this blog&lt;/a&gt;. &lt;code&gt;external-secrets&lt;/code&gt; can also do much more, such as generating secrets, CRDs, controller classes, and multi-tenancy, with more features planned &lt;a href="https://github.com/orgs/external-secrets/projects/2/views/1"&gt;on their roadmap&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;Argonaut has a native secret management solution for scaling teams. Argonaut integrations with third-party secret providers are coming in Q1 CY23.&lt;/p&gt;

&lt;p&gt;With Argonaut’s modern deployment platform, you can get up and running on AWS or GCP in minutes, not weeks. Our intuitive UI and quick integrations with GitHub, GitLab, AWS, and GCP make managing your infra and applications easy - all in one place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.argonaut.dev/"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8LLZouhk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pv9wlp2s7y0ne04w0xi8.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vault</category>
      <category>security</category>
      <category>aws</category>
      <category>secretmanagement</category>
    </item>
    <item>
      <title>GitOps Primer: The Benefits, Workflow, and Implementation of GitOps</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Wed, 15 Feb 2023 11:12:55 +0000</pubDate>
      <link>https://dev.to/argonaut/gitops-primer-the-benefits-workflow-and-implementation-of-gitops-4dg6</link>
      <guid>https://dev.to/argonaut/gitops-primer-the-benefits-workflow-and-implementation-of-gitops-4dg6</guid>
      <description>&lt;p&gt;GitOps is one of the biggest shifts in development methodologies in a long time. Since its introduction in 2017, there has only been a constant growth in this methodology of managing infra and apps. As we mentioned in our &lt;a href="https://www.argonaut.dev/blog/seven-cloud-trends-2023"&gt;7 cloud trends for 2023 blog&lt;/a&gt;, GitOps will continue to see a rise in adoption this year, and we will also see &lt;a href="https://argo-cd.readthedocs.io/en/stable/roadmap/#roadmap"&gt;new features&lt;/a&gt; and &lt;a href="https://fluxcd.io/roadmap/"&gt;enhancements&lt;/a&gt; from the popular tools.&lt;/p&gt;

&lt;p&gt;This article is the first one in the series covering GitOps, GitOps tools, best practices, and a detailed comparison of popular GitOps tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is GitOps?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitOps&lt;/strong&gt; is the practice of using Git as a single source of truth for declarative infrastructure and applications. It enables users to use Git version control and automate deployment, testing, and operations processes.&lt;/p&gt;

&lt;p&gt;Even though some GitOps practices had been followed for years, the term &lt;strong&gt;GitOps&lt;/strong&gt; was coined in 2017 by Alexis Richardson. Since its introduction, GitOps has seen rapid growth in adoption, with more and more organizations seeing the value in the improved collaboration, reliability, and security of their applications. In the years since, we have seen the launch of popular tools such as ArgoCD and Flux CD, which have made it easier to implement GitOps in organizations.&lt;/p&gt;

&lt;p&gt;In 2022, these tools achieved &lt;a href="https://www.techtarget.com/searchitoperations/news/252528152/GitOps-hits-stride-as-CNCF-graduates-Flux-CD-and-Argo-CD"&gt;CNCF graduate status&lt;/a&gt;, crossing the chasm and ready to take on enterprise workloads. As more organizations adopt GitOps, the tools, and processes surrounding it are constantly being improved, allowing organizations to reap the benefits of faster and more efficient deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  GitOps vs traditional deployments
&lt;/h3&gt;

&lt;p&gt;GitOps is different from traditional deployment methods in a few key ways. Firstly, it utilizes a code repository as the source of truth, with all changes and updates reflected in the repository. This makes it easier to track changes, as well as to roll back any changes if needed. Secondly, it implements Continuous Integration and Deployment (CI/CD) pipelines to ensure that the code that is deployed is always in sync with the code in the repository. It also enables automated rollbacks and disaster recovery in case of unforeseen issues.&lt;/p&gt;

&lt;p&gt;Finally, GitOps relies on principles such as immutable infrastructure, declarative configuration, observability, and auditability. These principles ensure reliable and secure deployments and improved collaboration and communication. All of these aspects make GitOps an attractive option for organizations that are looking to improve their DevOps practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key benefits of GitOps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;It improves collaboration between developers by allowing them to quickly and easily make changes to the codebase without fear of breaking the system.&lt;/li&gt;
&lt;li&gt;It provides a unified platform for managing deployments, with all of the code stored in the same repository. This uniformity makes it easier for developers to understand the process and debug and troubleshoot any issues that may arise.&lt;/li&gt;
&lt;li&gt;GitOps also increases reliability and security by enabling automated rollbacks and disaster recovery in case of unforeseen issues. This is possible due to the principles of immutable infrastructure, declarative configuration, observability, and auditability that GitOps relies on.&lt;/li&gt;
&lt;li&gt;GitOps also enables faster and more efficient deployments, significantly reducing the risk of human error and allowing teams to get their applications up and running quickly.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The GitOps workflow
&lt;/h2&gt;

&lt;p&gt;The GitOps workflow has been enabled by other foundational technologies, like the rise in the adoption of Kubernetes, the declarative paradigm of defining infrastructure using IaC tools, and containerization. Let us learn about the three main components of the GitOps workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Single source of truth
&lt;/h3&gt;

&lt;p&gt;A key component of GitOps is using a code repository as the source of truth. In this model, all changes to your IaC code (e.g. Terraform) and updates are reflected in the repository, which allows developers to easily track changes and roll back to a previous version if needed. Additionally, with the repository as the source of truth, developers can share and collaborate on the codebase more efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  CI/CD pipelines
&lt;/h3&gt;

&lt;p&gt;This code repository is also used to set up continuous integration and deployment (CI/CD) pipelines that ensure that the code that is deployed is always in sync with the code in the repository. By utilizing these pipelines, developers can ensure that their code is always up-to-date and that any changes are properly tested and deployed.&lt;/p&gt;

&lt;p&gt;CI/CD pipelines can work on a push-based or pull-based model. In a push-based pipeline, changes start with the CI system, which pushes them to the Kubernetes cluster, either through a series of scripted steps or by hand with &lt;code&gt;kubectl&lt;/code&gt;. In a pull-based pipeline, a deployment automator watches the image registry, and a deployment synchronizer residing in the cluster maintains its state. In both models, a single source of truth holds the manifests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automated rollbacks and disaster recovery
&lt;/h3&gt;

&lt;p&gt;GitOps also enables automated rollbacks and disaster recovery in case of unforeseen issues. This is done by leveraging the code repository as the source of truth and using CI/CD pipelines to ensure that the code that is deployed is always in sync with the code in the repository. Thus, if any issues arise, developers can quickly roll back to a previous version and recover from any issues without having to manually intervene.&lt;/p&gt;
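&lt;p&gt;Because every deployed state corresponds to a commit, a rollback is just a Git operation. A minimal sketch in a throwaway repo (file contents and commit messages are illustrative; in practice a pull-based agent such as ArgoCD or Flux CD reconciles the cluster after the push):&lt;/p&gt;

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Desired state lives in the repo.
echo "replicas: 3" > deployment.yaml
git add deployment.yaml
git commit -qm "scale to 3"

# A bad change lands.
echo "replicas: 30" > deployment.yaml
git commit -qam "bad change"

# Roll back: revert the bad commit; the cluster agent syncs to the restored state.
git revert -n HEAD
git commit -qm "rollback bad change"
cat deployment.yaml   # prints: replicas: 3
```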

&lt;p&gt;These processes together make GitOps a powerful way of developing and deploying code and infrastructure. Its benefits are even more visible as the complexity of one’s cloud setup grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing GitOps
&lt;/h2&gt;

&lt;p&gt;Once you have decided to implement GitOps in your organization, the implementation can be broken down into three main steps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up GitOps pipelines&lt;/li&gt;
&lt;li&gt;Automate deployments&lt;/li&gt;
&lt;li&gt;Set up monitoring and logging&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The GitOps pipelines:&lt;/strong&gt; There are a few different ways to set up GitOps pipelines, depending on the tools and processes that your organization is using. The most popular tools for setting up GitOps pipelines are ArgoCD and Flux CD, both of which are open-source projects and have achieved CNCF graduate status.&lt;/p&gt;

&lt;p&gt;To set up a GitOps pipeline with either of these tools, you need to first create a repository in a version control system such as GitHub. Then, you need to define the CI/CD pipelines with the tools, which will ensure that the code in the repository is always in sync with the deployed code.&lt;/p&gt;
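&lt;p&gt;With ArgoCD, for example, the pipeline is declared as an &lt;code&gt;Application&lt;/code&gt; resource that points at your repo and continuously syncs it to a cluster. A sketch (the repo URL, path, and names are placeholders):&lt;/p&gt;

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app            # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-manifests   # placeholder repo
    targetRevision: main
    path: k8s             # directory containing the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true         # delete resources removed from Git
      selfHeal: true      # undo manual drift in the cluster
```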

&lt;blockquote&gt;
&lt;p&gt;Note: Implementing GitOps requires different toolchains for apps and for infra. ArgoCD and Flux CD are primarily for deploying apps on Kubernetes. Crossplane takes a clever approach that leverages Kubernetes control loops to handle both app and infra resources. We’ll cover it in a future blog.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Automating Deployments with GitOps:&lt;/strong&gt; Once you have set up the GitOps pipelines, you can start automating deployments using GitOps. This process involves setting up the deployment synchronizer, which will watch the image registry and ensure that the state of the cluster is maintained. The deployment synchronizer will then deploy the code from the repository to the cluster automatically, and you can verify the changes by viewing the logs in the repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Monitoring and Logging for GitOps:&lt;/strong&gt; Once the GitOps pipelines and automated deployments are set up, it is important to set up monitoring and logging for the system. This is necessary to ensure that the system is working as expected and to identify any issues that may arise. Monitoring and logging can be set up using various tools such as Prometheus, Grafana, and ELK stack. &lt;a href="https://www.argonaut.dev/blog/observability-top-20"&gt;These tools&lt;/a&gt; allow for tracking of metrics, logs, and errors in the system, which can then be used to identify and troubleshoot any issues that may arise.&lt;/p&gt;

&lt;p&gt;Additionally, setting up proper alerting systems is also important to ensure that any issues are quickly identified and addressed. This will help ensure that the GitOps system runs smoothly and that any potential issues are quickly identified and resolved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros and Cons of using GitOps in an organization
&lt;/h2&gt;

&lt;p&gt;GitOps is becoming increasingly popular as an approach to DevOps and is seen as a valuable tool for modern organizations. While there are many benefits to using GitOps, there are also some drawbacks that should be considered before adopting this methodology.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros of using GitOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Improved collaboration and communication between developers due to a single source of truth for the codebase&lt;/li&gt;
&lt;li&gt;Processes are lightweight and vendor-neutral&lt;/li&gt;
&lt;li&gt;Increased reliability and security due to automated rollbacks and disaster recovery&lt;/li&gt;
&lt;li&gt;Faster and more efficient deployments due to automated CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Reduced risk of human error due to automation of processes&lt;/li&gt;
&lt;li&gt;Improved observability and auditability of the system&lt;/li&gt;
&lt;li&gt;Automating infra definition and testing reduces manual work and lowers cost&lt;/li&gt;
&lt;li&gt;Faster environment duplication with immutable and reproducible deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons of using GitOps
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Steep learning curve for developers new to the methodology&lt;/li&gt;
&lt;li&gt;Dependency on Git and GitOps tools, which can be costly and complex to manage&lt;/li&gt;
&lt;li&gt;Complexity of GitOps processes, which can be difficult to debug and troubleshoot&lt;/li&gt;
&lt;li&gt;Does not come with a centralized secret management solution&lt;/li&gt;
&lt;li&gt;Resistance to change in traditional organizations, as the adoption of GitOps requires a shift in mindset and processes&lt;/li&gt;
&lt;li&gt;Git is not designed for programmatic updates, which might cause conflicts requiring manual resolution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, adopting GitOps processes is a great way to modernize your DevOps practices. It enables organizations to quickly and easily make changes to the codebase without fear of breaking the system. Based on the above pros and cons list, you’ll see that the adoption of GitOps requires a shift in mindset and processes and can be complex to manage.&lt;/p&gt;

&lt;p&gt;The future posts in this series will explore the tools that enable GitOps for app deployments along with an in-depth comparison between ArgoCD and FluxCD.&lt;/p&gt;




&lt;p&gt;Start using &lt;a href="https://www.argonaut.dev/"&gt;Argonaut&lt;/a&gt; today to take advantage of GitOps best practices and streamline your deployment workflows. Argonaut also provides a single UI for both App and Infra deployments and improves team collaboration.&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>git</category>
      <category>tutorial</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Secret Management Primer: Challenges, Standards, and Best Practices</title>
      <dc:creator>Argonaut</dc:creator>
      <pubDate>Wed, 15 Feb 2023 11:07:48 +0000</pubDate>
      <link>https://dev.to/argonaut/secret-management-primer-challenges-standards-and-best-practices-3a5f</link>
      <guid>https://dev.to/argonaut/secret-management-primer-challenges-standards-and-best-practices-3a5f</guid>
      <description>&lt;p&gt;Secret management is the process of securely storing and managing sensitive information such as keys, passwords, and tokens. Secrets are used to provide privileged access and are usually stored in secure environments using secret management tools. Secrets are essential to establish the right access and connections on both the app and the infra side of things.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are secret management tools?
&lt;/h2&gt;

&lt;p&gt;Secret management tools are built for this specific task: managing, organizing, safely storing, and retrieving secret information. We cover some popular tools in &lt;a href="https://www.argonaut.dev/blog/tools-for-secret-management"&gt;this blog&lt;/a&gt; and suggest actionable steps for choosing the right tool to suit your requirements. These tools help address the challenges across SaaS, IaaS, PaaS, private, and hybrid multi-cloud scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges of secret management
&lt;/h2&gt;

&lt;p&gt;The challenges of managing secrets can occur in the various stages of the secret’s lifecycle, from generation and storing to distribution and revocation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Secrets at rest&lt;/strong&gt; can be vulnerable, especially if they are a part of the automated CI/CD pipelines and must be exposed to external tools or stored in the code repository or local device.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secret distribution&lt;/strong&gt; to authorized users has to be done securely. This may involve secure communication channels, purpose-built tools (Doppler or 1Password, not Slack), or physical measures such as security tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Static secrets&lt;/strong&gt; that don’t change in value can cause serious security issues as they may easily be shared, and recovering or updating their value can be difficult especially if the same value is used in multiple places.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt;: It is important to ensure one’s secret management practices comply with the legal requirements and regulations defined either in data privacy laws or industry standards. We will talk about some industry standards in the best practices section below. Manually managing secrets makes it hard to comply with such standards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: As the company scales, the issue of &lt;strong&gt;secret sprawl&lt;/strong&gt; arises, which makes credentials difficult to track and manage, hence more vulnerable to hacking. According to &lt;a href="https://www.verizon.com/business/resources/reports/2022/dbir/2022-data-breach-investigations-report-dbir.pdf"&gt;a report from Verizon&lt;/a&gt;, stolen credentials account for nearly half of all data breaches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Human factors&lt;/strong&gt;: People are an important part of secret management, but they can also be a weak point. Ensuring that individuals handle secrets responsibly and follow established procedures is critical to the system’s security.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benefits of Secret Management Tools
&lt;/h2&gt;

&lt;p&gt;Secret management tools like Vault, AWS Secret Manager, Doppler, and many others &lt;a href="https://www.argonaut.dev/blog/tools-for-secret-management"&gt;discussed here&lt;/a&gt; can help organizations overcome the various challenges of secret management practice discussed above.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Storing secrets&lt;/strong&gt;: All secret management tools act as a central store for your secrets; this includes passwords, SSL certificates, and API keys. This makes it easy to manage and protect secrets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access controls&lt;/strong&gt;: These tools can enforce access control to ensure that only authorized users access specific secrets. Types of access controls can be based on identity, location, or roles. You can also easily set limits on the count of clients and revoke or update access to any user.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated distribution&lt;/strong&gt;: Secret management tools can automate the process of distributing secrets to authorized users and revoking access when necessary. This can help reduce the risk of human error and improve the efficiency of secret management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance&lt;/strong&gt;: Secret management tools make it easier for organizations to comply with regulatory and industry standards, such as data privacy laws and PCI DSS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrations&lt;/strong&gt;: Their ability to integrate with other systems and tools, such as configuration management tools and CI/CD pipelines, helps streamline the management of secrets in a cloud environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: These tools are built to enterprise standards and grow as your organization grows. Having a centralized store with good access control and compliance policies also reduces the risks of secret sprawl.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Secret Management Best Practices
&lt;/h2&gt;

&lt;p&gt;Once you’ve chosen the right secret management tool and are ready to make the most of the benefits, it’s time to check off these best practices.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Consolidate your secrets and secure them&lt;/strong&gt;: If you’re using any kind of developer tool or cloud service, you will already be using secrets. Some of them may lie as plain text on your config files or encrypted and pushed to your Git repo. You must first bring them to a secret management tool to secure them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rotation&lt;/strong&gt;: Secrets can be reset or changed automatically on a schedule. It is good practice to rotate your secrets regularly; in fact, many compliance standards require it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Secrets no longer need to be hard-coded or embedded. They can be injected directly into the pipeline in most cases and made available for the tools/users that need access to a particular resource.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create and enforce policy&lt;/strong&gt;: Cloud security policies are essential components that provide a formal guideline for how a company operates in the cloud. Creating and enforcing these policies helps companies reduce risk and assures their customers that their data is protected. Here’s &lt;a href="https://www.techtarget.com/searchsecurity/tip/How-to-create-a-cloud-security-policy-step-by-step"&gt;how to create&lt;/a&gt; a cloud security policy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Least privilege access&lt;/strong&gt;: Enforcing least-privilege access for machines and humans means access to a secret is granted on a need-to-know, just-in-time basis, and only for a specified duration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable solution&lt;/strong&gt;: Pick a tool that scales as your needs and secret count grow and that works just as efficiently with your future systems. There is an extra cost, but it is worth paying to manage secrets securely.&lt;/li&gt;
&lt;/ol&gt;
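&lt;p&gt;As one concrete example of the rotation practice above, a new credential can be generated and then pushed to the store as a new version. A hedged sketch using &lt;code&gt;openssl&lt;/code&gt; (the secret id is a placeholder; most secret managers can also generate and rotate values for you):&lt;/p&gt;

```shell
# Generate a strong random value: 32 random bytes, base64-encoded.
new_secret=$(openssl rand -base64 32)
echo "${#new_secret}"   # prints: 44

# Then store the new version, e.g. with AWS Secrets Manager:
# aws secretsmanager put-secret-value --secret-id my-app/db-password \
#   --secret-string "$new_secret"
```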

&lt;h2&gt;
  
  
  Some Industry standards for Secret management
&lt;/h2&gt;

&lt;p&gt;There are several industry standards and best practices for cloud secret management:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://www.iso.org/isoiec-27001-information-security.html"&gt;ISO 27001&lt;/a&gt;: Is an international standard that outlines the best practices for information security management, including the secure handling of secrets.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://csrc.nist.gov/publications/detail/sp/800-53/rev-5/final"&gt;NIST SP 800-53&lt;/a&gt;: The National Institute of Standards and Technology (NIST) publishes guidelines for securing federal information systems, including recommendations for secret management.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.pcisecuritystandards.org/"&gt;PCI DSS&lt;/a&gt;: The Payment Card Industry Data Security Standard (PCI DSS) is a set of requirements for organizations that handle credit card information. It includes guidelines for securing secrets and protecting against unauthorized access.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.cisecurity.org/controls/cis-controls-list"&gt;Center for Internet Security (CIS) Controls&lt;/a&gt;: The CIS Controls are a set of best cybersecurity practices organized into 20 control families. Secret management is addressed in several of the control families, including Access Control and Maintenance.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloudsecurityalliance.org/research/cloud-controls-matrix/"&gt;Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM)&lt;/a&gt;: The CSA CCM is a framework for evaluating the security of cloud computing environments. It is a set of requirements to certify that all companies that collect and transmit credit card information maintain a secure environment. It also includes secret management controls such as using strong passwords and implementing access controls.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://us.aicpa.org/interestareas/frc/assuranceadvisoryservices/serviceorganization-smanagement"&gt;Service Organization Control (SOC 2)&lt;/a&gt;: This compliance is intended to provide assurance to customers that a service organization has implemented the necessary controls to protect sensitive data. It includes risk assesssments, disaster recovery procedures, confidentiality and integrity control for sensitive data.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.hhs.gov/hipaa/index.html"&gt;Health Information Privacy Protection Act (HIPAA)&lt;/a&gt;: Under HIPAA, covered entities, such as health care providers and insurance companies, are required to implement policies and procedures for safeguarding protected health information (PHI). This includes encryption, access, and audit controls to protect electronic protected health information (ePHI).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It is important for organizations to understand and comply with relevant industry standards and best practices when implementing cloud secret management. Based on your region of operation, there may be local laws such as &lt;a href="https://oag.ca.gov/privacy/ccpa"&gt;CCPA&lt;/a&gt; or &lt;a href="https://gdpr-info.eu/"&gt;GDPR&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out the other blogs in the secret management series, where we talk about the top &lt;a href="https://www.argonaut.dev/blog/tools-for-secret-management"&gt;cloud secret management tools&lt;/a&gt; and about &lt;a href="https://www.argonaut.dev/blog/secret-management-in-kubernetes"&gt;Kubernetes secret management approaches&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;a href="https://argonaut.dev"&gt;Argonaut&lt;/a&gt; has a native secret management solution for small teams. Argonaut integrations with third party secret providers is coming in Q1CY23.&lt;/p&gt;

&lt;p&gt;With Argonaut’s modern deployment platform, you can get up and running on AWS or GCP in minutes, not weeks. Our intuitive UI and quick integrations with GitHub, GitLab, AWS, and GCP make managing your infra and applications easy - all in one place.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>security</category>
      <category>secretmanagement</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
