<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Datree</title>
    <description>The latest articles on DEV Community by Datree (@datreeio).</description>
    <link>https://dev.to/datreeio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F606%2Ff057147a-1df6-4d13-8107-ba86ba5cf23d.png</url>
      <title>DEV Community: Datree</title>
      <link>https://dev.to/datreeio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/datreeio"/>
    <language>en</language>
    <item>
      <title>ArgoCD Best Practices You Should Know</title>
      <dc:creator>Itamar Ben Yair</dc:creator>
      <pubDate>Thu, 07 Apr 2022 11:02:57 +0000</pubDate>
      <link>https://dev.to/datreeio/argocd-best-practices-you-should-know-bfe</link>
      <guid>https://dev.to/datreeio/argocd-best-practices-you-should-know-bfe</guid>
      <description>&lt;p&gt;My DevOps journey kicked off when we started to develop Datree - an open-source CLI tool that aims to help DevOps engineers to prevent Kubernetes misconfigurations from reaching production. One year later, seeking best practices and more ways to prevent misconfigurations became my way of life.&lt;/p&gt;
&lt;p&gt;This is why, when I first learned about Argo CD, the thought of using Argo without knowing its pitfalls and complications simply didn’t make sense to me. After all, misconfiguring it can easily cause the next production outage.&lt;/p&gt;
&lt;p&gt;In this article, we’ll explore some of the best practices of Argo that I've found and learn how we can validate our custom resources against these best practices.&lt;/p&gt;
&lt;h2&gt;Argo Best Practices&lt;/h2&gt;
&lt;h3&gt;1. Disallow providing an empty retryStrategy (i.e. {})&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Workflows&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; The user can specify a &lt;code&gt;retryStrategy&lt;/code&gt; that will dictate how failed or errored steps are retried in a workflow. Providing an empty &lt;code&gt;retryStrategy&lt;/code&gt; (i.e. &lt;code&gt;retryStrategy: {}&lt;/code&gt;) will cause a container to retry until completion and eventually cause OOM issues.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://github.com/argoproj/argo-workflows/blob/master/examples/README.md#retrying-failed-or-errored-steps"&gt;read more&lt;/a&gt;&lt;/p&gt;
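To bound retries instead of supplying an empty strategy, an explicit limit can be set on the template. A minimal sketch; the workflow name, image, and limit value are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-example-   # illustrative name
spec:
  entrypoint: main
  templates:
    - name: main
      retryStrategy:
        limit: "3"               # bound the retries instead of retryStrategy: {}
      container:
        image: alpine:3.18
        command: [sh, -c]
        args: ["exit 1"]
```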
&lt;h3&gt;2. Ensure that Workflow pods are not configured to use the default service account&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Workflows&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; All pods in a workflow run with a service account, which can be specified in &lt;code&gt;workflow.spec.serviceAccountName&lt;/code&gt;. If omitted, Argo will use the &lt;code&gt;default&lt;/code&gt; service account of the workflow's namespace. This gives the workflow (i.e. the pod) the ability to interact with the Kubernetes API server, which allows attackers with access to a single container to abuse Kubernetes by using the &lt;code&gt;AutomountServiceAccountToken&lt;/code&gt;. If the option for &lt;code&gt;AutomountServiceAccountToken&lt;/code&gt; was disabled, then the default service account that Argo uses won’t have any permissions, and the workflow will fail.&lt;/p&gt;
&lt;p&gt;It’s recommended to create dedicated user-managed service accounts with the appropriate roles.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://hackersvanguard.com/abuse-kubernetes-with-the-automountserviceaccounttoken/"&gt;read more&lt;/a&gt;&lt;/p&gt;
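A workflow can point at a dedicated service account instead of falling back to the default one. A sketch, where the service account name is an illustrative assumption:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sa-example-
spec:
  entrypoint: main
  serviceAccountName: workflow-runner  # dedicated, user-managed SA with only the roles the workflow needs
  templates:
    - name: main
      container:
        image: alpine:3.18
        command: [echo, hello]
```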
&lt;h3&gt;3. Ensure label &lt;code&gt;part-of: argocd&lt;/code&gt; exists for ConfigMaps&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo CD&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; Related ConfigMap resources that aren’t labeled with &lt;code&gt;app.kubernetes.io/part-of: argocd&lt;/code&gt; won’t be used by Argo CD.&lt;/p&gt;
&lt;p&gt;When installing Argo CD, its atomic configuration contains a few services and &lt;code&gt;configMaps&lt;/code&gt;. For each specific kind of ConfigMap and Secret resource, there is only a single supported resource name (as listed in the Argo CD documentation) - if you need to merge things, you need to do it before creating them. It’s important to label your ConfigMap resources with &lt;code&gt;app.kubernetes.io/part-of: argocd&lt;/code&gt;; otherwise, Argo CD will not be able to use them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#atomic-configuration"&gt;read more&lt;/a&gt;&lt;/p&gt;
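For example, the &lt;code&gt;argocd-cm&lt;/code&gt; ConfigMap must carry the label for Argo CD to pick it up (the data key shown is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm                       # one of the supported resource names
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd   # without this label, Argo CD ignores the ConfigMap
data:
  application.instanceLabelKey: argocd.argoproj.io/instance
```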
&lt;h3&gt;4. Set FailFast=false on a DAG to run all of its branches to completion&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Workflows&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; As an alternative to specifying sequences of steps in a &lt;code&gt;Workflow&lt;/code&gt;, you can define the workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. The DAG logic has a built-in &lt;code&gt;fail fast&lt;/code&gt; feature that stops scheduling new steps as soon as it detects that one of the DAG nodes has failed, then waits until all running nodes are completed before failing the DAG itself. The &lt;a href="https://github.com/argoproj/argo-workflows/blob/master/examples/dag-disable-failFast.yaml"&gt;FailFast&lt;/a&gt; flag defaults to &lt;code&gt;true&lt;/code&gt;. If set to &lt;code&gt;false&lt;/code&gt;, it will allow a DAG to run all of its branches to completion (either success or failure), regardless of failed outcomes in other branches of the DAG.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; more info and example about this feature &lt;a href="https://github.com/argoproj/argo-workflows/issues/1442"&gt;here&lt;/a&gt;.&lt;/p&gt;
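In the DAG template, the toggle is the &lt;code&gt;failFast&lt;/code&gt; field. A minimal sketch (task names, dependencies, and the echo template are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-example-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        failFast: false        # run every branch to completion even if one task fails
        tasks:
          - name: a
            template: echo
          - name: b
            dependencies: [a]
            template: echo
          - name: c             # independent branch; still runs even if a or b fails
            template: echo
    - name: echo
      container:
        image: alpine:3.18
        command: [echo, done]
```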
&lt;h3&gt;5. Ensure Rollout pause step has a configured duration&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Rollouts&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt;  For every Rollout, we can define a list of steps. Each step can have one of two fields:  &lt;code&gt;setWeight&lt;/code&gt; and &lt;code&gt;pause&lt;/code&gt;. The &lt;code&gt;setWeight&lt;/code&gt; field dictates the percentage of traffic that should be sent to the canary, and the &lt;code&gt;pause&lt;/code&gt; literally instructs the rollout to pause.&lt;/p&gt;
&lt;p&gt;Under the hood, the Argo controller uses these steps to manipulate the ReplicaSets during the rollout. When the controller reaches a &lt;code&gt;pause&lt;/code&gt; step for a rollout, it will add a &lt;code&gt;PauseCondition&lt;/code&gt; struct to the &lt;code&gt;.status.PauseConditions&lt;/code&gt; field. If the &lt;code&gt;duration&lt;/code&gt; field within the &lt;code&gt;pause&lt;/code&gt; struct is set, the rollout will not progress to the next step until it has waited for the value of the &lt;code&gt;duration&lt;/code&gt; field. However, if the &lt;code&gt;duration&lt;/code&gt; field has been omitted, &lt;strong&gt;the rollout might wait indefinitely&lt;/strong&gt; until the added pause condition is removed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://argoproj.github.io/argo-rollouts/features/canary/#overview"&gt;read more&lt;/a&gt;&lt;/p&gt;
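A canary steps list with a bounded pause might look like this sketch (the Rollout name, weights, and duration are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: canary-example
spec:
  strategy:
    canary:
      steps:
        - setWeight: 20        # send 20% of traffic to the canary
        - pause:
            duration: 60s      # without duration, the rollout can pause indefinitely
        - setWeight: 100
```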
&lt;h3&gt;6. Specify Rollout’s &lt;code&gt;revisionHistoryLimit&lt;/code&gt;
&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Rollouts&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; the &lt;code&gt;.spec.revisionHistoryLimit&lt;/code&gt; is an optional field that indicates the number of old ReplicaSets which should be retained in order to allow rollback. These old ReplicaSets consume resources in &lt;code&gt;etcd&lt;/code&gt; and crowd the output of &lt;code&gt;kubectl get rs&lt;/code&gt;. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of Deployment.&lt;/p&gt;
&lt;p&gt;By default, 10 old ReplicaSets are kept; however, the ideal value depends on the frequency and stability of new Deployments. More specifically, setting this field to zero means that all old ReplicaSets with 0 replicas will be cleaned up. In this case, a new Deployment rollout cannot be undone, since its revision history is cleaned up.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#revision-history-limit"&gt;read more&lt;/a&gt;&lt;/p&gt;
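Setting the field explicitly is a one-liner on the Rollout spec (the name and value are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: history-example
spec:
  revisionHistoryLimit: 3   # keep 3 old ReplicaSets for rollback (default is 10)
```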
&lt;h3&gt;7. Set scaleDownDelaySeconds to 30s to ensure IP table propagation across the nodes in a cluster&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Rollouts&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; When the rollout changes the selector on a service, there is a propagation delay before all the nodes update their IP tables to send traffic to the new pods instead of the old ones. During this delay, traffic will be directed to the old pods if the nodes have not been updated yet. In order to prevent packets from being sent to a node that killed the old pod, the rollout uses the &lt;code&gt;scaleDownDelaySeconds&lt;/code&gt; field to give nodes enough time to broadcast the IP table changes. If omitted, the rollout waits 30 seconds before scaling down the previous ReplicaSet.&lt;/p&gt;
&lt;p&gt;It’s recommended to set &lt;code&gt;scaleDownDelaySeconds&lt;/code&gt; to a minimum of 30 seconds in order to ensure IP table propagation across the nodes in a cluster. The reason is that Kubernetes waits for a specified time called the termination grace period, which is 30 seconds by default.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://argoproj.github.io/argo-rollouts/features/specification/"&gt;read more&lt;/a&gt;&lt;/p&gt;
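In the Rollout spec, the delay is configured under the strategy; the sketch below assumes a blue-green strategy (where the field lives per the specification linked above), and the service names are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: bluegreen-example
spec:
  strategy:
    blueGreen:
      activeService: my-active-svc     # illustrative service names
      previewService: my-preview-svc
      scaleDownDelaySeconds: 30        # give nodes time to propagate IP table changes
```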
&lt;h3&gt;8. Ensure retry on both Error and TransientError&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Workflows&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; &lt;code&gt;retryStrategy&lt;/code&gt; is an optional field of the &lt;code&gt;Workflow&lt;/code&gt; CRD that provides controls for retrying a workflow step. One of the fields of &lt;code&gt;retryStrategy&lt;/code&gt; is &lt;code&gt;retryPolicy&lt;/code&gt;, which defines the policy of NodePhase statuses that will be retried (NodePhase is the condition of a node at the current time). The options for &lt;code&gt;retryPolicy&lt;/code&gt; are &lt;code&gt;Always&lt;/code&gt;, &lt;code&gt;OnError&lt;/code&gt;, or &lt;code&gt;OnTransientError&lt;/code&gt;. In addition, the user can use an &lt;a href="https://argoproj.github.io/argo-workflows/retries/#conditional-retries"&gt;&lt;code&gt;expression&lt;/code&gt;&lt;/a&gt; for finer control over the retries.&lt;/p&gt;
&lt;p&gt;What’s the catch?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;retryPolicy=Always is too much.&lt;/strong&gt; The user only wants to retry on system-level errors (e.g., the node dying or being preempted), but not on errors occurring in user-level code, since these failures indicate a bug. In addition, this option is more suitable for long-running containers than for workflows, which are jobs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;retryPolicy=OnError doesn't handle preemptions:&lt;/strong&gt; &lt;code&gt;retryPolicy=OnError&lt;/code&gt; handles some system-level errors like the node disappearing or the pod being deleted. However, during graceful Pod termination, the &lt;code&gt;kubelet&lt;/code&gt; assigns a &lt;code&gt;Failed&lt;/code&gt; status and a &lt;code&gt;Shutdown&lt;/code&gt; reason to the terminated Pods. As a result, node preemptions result in node status "Failure", not "Error", so preemptions aren't retried.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;retryPolicy=OnError doesn't handle transient errors:&lt;/strong&gt; classifying a preemption failure message as a transient error is allowed; however, this requires &lt;code&gt;retryPolicy=OnTransientError&lt;/code&gt; (see also &lt;code&gt;TRANSIENT_ERROR_PATTERN&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;We recommend setting &lt;code&gt;retryPolicy: "Always"&lt;/code&gt; and using the following expression: &lt;code&gt;'lastRetry.status == "Error" or (lastRetry.status == "Failed" and asInt(lastRetry.exitCode) not in [0])'&lt;/code&gt;&lt;/p&gt;
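Put together on a workflow template, that recommendation looks like the fragment below (the limit is an illustrative addition):

```yaml
retryStrategy:
  retryPolicy: "Always"
  limit: "3"        # illustrative cap on the number of retries
  expression: 'lastRetry.status == "Error" or (lastRetry.status == "Failed" and asInt(lastRetry.exitCode) not in [0])'
```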
&lt;h3&gt;9. Ensure &lt;strong&gt;progressDeadlineAbort is set to true&lt;/strong&gt;, especially if &lt;strong&gt;progressDeadlineSeconds&lt;/strong&gt; has been set&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo Rollouts&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; A user can set &lt;code&gt;progressDeadlineSeconds&lt;/code&gt; which states the maximum time in seconds in which a rollout must make progress during an update before it is considered to be failed.&lt;/p&gt;
&lt;p&gt;If rollout pods get stuck in an error state (e.g. image pull backoff), the rollout degrades after the progress deadline is exceeded, but the bad ReplicaSet/pods aren't scaled down. The pods keep retrying, and eventually the rollout message reads &lt;code&gt;ProgressDeadlineExceeded: The replicaset &amp;lt;name&amp;gt; has timed out progressing&lt;/code&gt;. To abort the rollout, the user should set both &lt;code&gt;progressDeadlineSeconds&lt;/code&gt; and &lt;code&gt;progressDeadlineAbort&lt;/code&gt;, with &lt;code&gt;progressDeadlineAbort: true&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://githubmemory.com/repo/argoproj/argo-rollouts/issues/1593"&gt;read more&lt;/a&gt;&lt;/p&gt;
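The pair of fields goes on the Rollout spec; a sketch with illustrative values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: deadline-example
spec:
  progressDeadlineSeconds: 600   # consider the update failed after 10 minutes without progress
  progressDeadlineAbort: true    # also abort, so the bad ReplicaSet is scaled down
```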
&lt;h3&gt;10. &lt;strong&gt;Ensure custom resources match the namespace of the ArgoCD instance&lt;/strong&gt;
&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Project:&lt;/strong&gt; Argo CD&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Best practice:&lt;/strong&gt; In each repository, all &lt;code&gt;Application&lt;/code&gt; and &lt;code&gt;AppProject&lt;/code&gt; manifests should use the same &lt;code&gt;metadata.namespace&lt;/code&gt;. The reason depends on how you installed Argo CD.&lt;/p&gt;
&lt;p&gt;If you deployed Argo CD in the typical way, under the hood Argo CD creates &lt;code&gt;ClusterRoles&lt;/code&gt; and &lt;code&gt;ClusterRoleBindings&lt;/code&gt; that reference the &lt;code&gt;argocd&lt;/code&gt; namespace by default. In this case, it’s recommended not only to ensure all Argo CD resources match the namespace of the Argo CD instance, but also to use the &lt;code&gt;argocd&lt;/code&gt; namespace itself; otherwise, you need to make sure to update the namespace reference in all Argo CD internal resources.&lt;/p&gt;
&lt;p&gt;However, if you deployed Argo CD for external clusters (in “Namespace Isolation Mode”), then instead of &lt;code&gt;ClusterRole&lt;/code&gt; and &lt;code&gt;ClusterRoleBinding&lt;/code&gt; resources, Argo creates &lt;code&gt;Roles&lt;/code&gt; and associated &lt;code&gt;RoleBindings&lt;/code&gt; in the namespace where Argo CD was deployed. The created service account is granted only a limited level of access, so for Argo CD to be able to function as desired, access to the namespace must be granted explicitly. In this case, it’s recommended to make sure all the resources, including &lt;code&gt;Application&lt;/code&gt; and &lt;code&gt;AppProject&lt;/code&gt;, use the correct namespace of the ArgoCD instance.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt; &lt;a href="https://blog.andyserver.com/2020/12/argocd-namespace-isolation/"&gt;read more&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;So...Now What?&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;I’m a GitOps believer.&lt;/strong&gt; I believe that every Kubernetes resource should be handled exactly the same as your source code, especially if you are using Helm/Kustomize. So, the way I see it, we should automatically check our resources on every code change.&lt;/p&gt;
&lt;p&gt;You can write your policies using languages like Rego or JSON Schema and use tools like OPA Conftest or different validators to scan and validate your resources on every change. Additionally, if you have one GitOps repository, then Argo plays a great role in providing a centralized repository for you to develop and version control your policies.&lt;/p&gt;
&lt;p&gt;However, writing policies might be a pretty challenging task on its own, especially with Rego.&lt;/p&gt;
&lt;p&gt;Another way would be to look for a tool like Datree, which already comes with predefined policies, YAML schema validation, and best practices for Kubernetes and Argo.&lt;/p&gt;
&lt;h2&gt;How Datree works&lt;/h2&gt;
&lt;p&gt;The Datree CLI runs automatic checks on every resource that exists in a given path. After the check is completed, Datree displays a detailed output of any violation or misconfiguration it finds, with guidelines on how to fix it:&lt;/p&gt;

</description>
      <category>argocd</category>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>argo</category>
    </item>
    <item>
      <title>How to build a Helm plugin in minutes</title>
      <dc:creator>Roman Labunsky</dc:creator>
      <pubDate>Wed, 16 Jun 2021 13:39:58 +0000</pubDate>
      <link>https://dev.to/datreeio/how-to-build-a-helm-plugin-in-minutes-47i0</link>
      <guid>https://dev.to/datreeio/how-to-build-a-helm-plugin-in-minutes-47i0</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://helm.sh/"&gt;Helm&lt;/a&gt; is a great addition to the Kubernetes ecosystem; it simplifies complex Kubernetes manifests by separating them into charts and values. &lt;/p&gt;

&lt;p&gt;Sharing charts has never been easier, especially since all the customizable parameters are located separately (values.yaml). The downside is that there’s no single place to see the resulting manifest, as it’s usually compiled and installed in a single step - &lt;code&gt;helm install&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;Helm alleviates this with a plugin system that allows you to seamlessly integrate with the Helm flow and easily run custom code that’s not part of the Helm core. As we’ll see in the next section, that code can be written in any language, even Bash.&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple example of how to create a Helm plugin
&lt;/h2&gt;

&lt;p&gt;Let's build a simple plugin that prints out some useful information and environment variables that Helm provides.&lt;/p&gt;

&lt;p&gt;A Helm plugin consists of a single required file - &lt;code&gt;plugin.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;That’s the entrypoint for Helm. From it, Helm learns your plugin’s settings, the command it executes when the plugin is run, and the hooks for customizing the plugin lifecycle (more on that later).&lt;/p&gt;

&lt;p&gt;This is the only required file for a complete plugin. You can set it up in 3 steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;code&gt;plugin.yaml&lt;/code&gt; in a new plugin directory&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the installation command from the directory of the file&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm plugin install .
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Execute the plugin&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm myplugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;
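A minimal plugin.yaml for step 1 might look like this sketch; the plugin name and the printed environment variables are assumptions (Helm expands `$VAR` references in the command):

```yaml
name: "myplugin"                 # invoked as `helm myplugin`
version: "0.1.0"
usage: "print Helm-provided environment variables"
description: "A minimal example plugin"
command: "echo Helm binary: $HELM_BIN, plugins dir: $HELM_PLUGINS"
ignoreFlags: false
```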

&lt;h2&gt;
  
  
  A complete example of a Helm plugin
&lt;/h2&gt;

&lt;p&gt;The simple plugin example covers the most basic use cases, goals that can be achieved even with an alias. More complex cases require more points of integration with Helm and greater customizability of the plugin’s lifecycle. The following parts will give you all the necessary tools to execute any logic in the Helm flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lifecycle customization - Install, update and delete hooks
&lt;/h3&gt;

&lt;p&gt;Helm provides an under-documented capability (the closest thing I found to documentation was &lt;a href="https://github.com/helm/helm/blob/bf486a25cdc12017c7dac74d1582a8a16acd37ea/pkg/plugin/hooks.go"&gt;here&lt;/a&gt;) for hooking into the plugin’s &lt;a href="https://helm.sh/docs/helm/helm_plugin/"&gt;install, update or uninstall commands&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
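In plugin.yaml, a hooks section maps each lifecycle command to a script; the script paths below are assumptions:

```yaml
hooks:
  install: "$HELM_PLUGIN_DIR/scripts/install.sh"   # runs on `helm plugin install`
  update: "$HELM_PLUGIN_DIR/scripts/install.sh"    # runs on `helm plugin update`
  delete: "$HELM_PLUGIN_DIR/scripts/cleanup.sh"    # runs on `helm plugin uninstall`
```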


&lt;p&gt;Each hook corresponds to a command and will execute the provided script when invoked. &lt;/p&gt;

&lt;p&gt;Some plugins may require a more complex installation flow: Downloading a binary based on OS architecture, building or compiling code or simply installing system dependencies. The install script is the place to do it. A useful example can be found &lt;a href="https://github.com/datreeio/helm-datree/blob/6e498e5e966f36a38f67e986022c74781da865b1/scripts/install.sh"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Execution integration
&lt;/h3&gt;

&lt;p&gt;There are two ways to specify what will be executed and when:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;command&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
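`command` is a single executable string in plugin.yaml; the binary path is an assumption:

```yaml
command: "$HELM_PLUGIN_DIR/bin/myplugin"
```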


&lt;p&gt;&lt;strong&gt;platformCommand&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
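`platformCommand` lists one command per OS/architecture; the binary names are assumptions:

```yaml
platformCommand:
  - os: linux
    arch: amd64
    command: "$HELM_PLUGIN_DIR/bin/myplugin-linux-amd64"
  - os: darwin               # no arch: matches any macOS architecture
    command: "$HELM_PLUGIN_DIR/bin/myplugin-darwin"
```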


&lt;p&gt;It's important to note that Helm has a hierarchy for choosing the correct command:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;platformCommand(os+arch match)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;platformCommand(os only match)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;command&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Helm plugins aren’t executed in a shell, so complex commands must be part of a script that will be executed every time the plugin is invoked. &lt;/p&gt;

&lt;p&gt;A script can be used to run complex logic, handle parameter parsing, etc.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: "$HELM_PLUGIN_DIR/scripts/run.sh"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A comprehensive example of a run script can be found &lt;a href="https://github.com/datreeio/helm-datree/blob/6e498e5e966f36a38f67e986022c74781da865b1/scripts/run.sh"&gt;here&lt;/a&gt;. A very useful thing that we can do is to render the chart and then execute logic on the resulting Kubernetes yaml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm template "${HELM_OPTIONS[@]}" &amp;gt; ${TEMP_MANIFEST_NAME}

$HELM_PLUGIN_DIR/bin/myplugin ${TEMP_MANIFEST_NAME} "${MYPLUGIN_OPTIONS[@]}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A very important flag in &lt;code&gt;plugin.yaml&lt;/code&gt; is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ignoreFlags: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This flag specifies whether the command line params are passed to the plugin. If the plugin accepts parameters, this should be set to false.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tips and caveats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Helm exposes many &lt;a href="https://helm.sh/docs/topics/plugins/#environment-variables"&gt;environment variables&lt;/a&gt; that can simplify a lot of the complex logic and provides much of the necessary information and context for the plugin’s execution. &lt;/li&gt;
&lt;li&gt;The &lt;code&gt;useTunnel&lt;/code&gt; flag in &lt;code&gt;config.yaml&lt;/code&gt; is deprecated in Helm V3 and is no longer needed&lt;/li&gt;
&lt;li&gt;To easily test the plugin during development, you can install the plugin from the dev directory by running &lt;code&gt;helm plugin install PATH_TO_DIR&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;The plugin can then be uninstalled with &lt;code&gt;helm plugin uninstall PLUGIN_NAME&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Helm plugin writing is easy to learn but difficult to master. Lack of documentation makes writing complex plugins an arduous task. &lt;/p&gt;

&lt;p&gt;This article aims to expose some of the lesser known abilities of the Helm plugin system and to provide tools and scaffolding that remove the limitations of the plugin system and allow execution of as complex a business logic as is necessary to extend Helm’s behavior.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>helm</category>
      <category>devops</category>
    </item>
    <item>
      <title>A Deep Dive Into Kubernetes Schema Validation</title>
      <dc:creator>Eyar Zilberman</dc:creator>
      <pubDate>Tue, 01 Jun 2021 10:24:16 +0000</pubDate>
      <link>https://dev.to/datreeio/a-deep-dive-into-kubernetes-schema-validation-39ll</link>
      <guid>https://dev.to/datreeio/a-deep-dive-into-kubernetes-schema-validation-39ll</guid>
      <description>&lt;h2&gt;
  
  
  Why run schema validation?
&lt;/h2&gt;

&lt;p&gt;How do you ensure the stability of your Kubernetes clusters? How do you know that your manifests are syntactically valid? Are you sure you don’t have any invalid data types? Are any mandatory fields missing? &lt;/p&gt;

&lt;p&gt;Most often, we only become aware of these misconfigurations at the worst time - when trying to deploy the new manifests. &lt;/p&gt;

&lt;p&gt;Specialized tools and a “shift-left” approach make it possible to validate Kubernetes manifests against their schema before they’re applied to a cluster. In this article, I'll address how you can avoid misconfigurations and which tools are best to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;Running schema validation tests is important, and the sooner the better.&lt;/p&gt;

&lt;p&gt;If all machines (local developers environment, CI, etc.) have access to your Kubernetes cluster, run &lt;code&gt;kubectl --dry-run&lt;/code&gt; in server mode on every code change. If this isn’t possible, and you want to perform schema validation tests offline, use kubeconform together with a policy enforcement tool to have optimal validation coverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Available tools
&lt;/h2&gt;

&lt;p&gt;Verifying the state of Kubernetes manifests may seem like a trivial task, because the Kubernetes CLI (kubectl) has the ability to verify resources before they’re applied to a cluster.  You can verify the schema by using the &lt;a href="https://github.com/kubernetes/enhancements/blob/master/keps/sig-api-machinery/576-dry-run/README.md" rel="noopener noreferrer"&gt;dry-run&lt;/a&gt; flag (&lt;code&gt;--dry-run=client/server&lt;/code&gt;) when specifying the &lt;code&gt;kubectl create&lt;/code&gt; or &lt;code&gt;kubectl apply&lt;/code&gt; commands, which will perform the validation without applying Kubernetes resources to the cluster.&lt;/p&gt;
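For example (the manifest path is illustrative; server mode requires access to a running cluster):

```shell
# client-side validation only
kubectl apply -f deployment.yaml --dry-run=client

# full validation against the API server's schemas, without persisting anything
kubectl apply -f deployment.yaml --dry-run=server
```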

&lt;p&gt;But I can assure you that it’s actually more complex. A running Kubernetes cluster is required to obtain the schema for the set of resources being validated. So, when incorporating manifest verification into a CI process, you must also manage connectivity and credentials to perform the validation. This becomes even more challenging when dealing with multiple microservices in several environments (prod, dev, etc.).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/instrumenta/kubeval/tree/master/kubeval" rel="noopener noreferrer"&gt;Kubeval&lt;/a&gt; and &lt;a href="https://github.com/yannh/kubeconform" rel="noopener noreferrer"&gt;kubeconform&lt;/a&gt; are command-line tools that were developed with the intent to validate Kubernetes manifests without the requirement of having a running Kubernetes environment. Because kubeconform was inspired by kubeval, they operate similarly — verification is performed against pre-generated JSON schemas that are created from the OpenAPI specifications (&lt;a href="https://raw.githubusercontent.com/kubernetes/kubernetes/master/api/openapi-spec/swagger.json" rel="noopener noreferrer"&gt;swagger.json&lt;/a&gt;) for each particular Kubernetes version. All that remains &lt;a href="https://github.com/datreeio/kubernetes-schema-validation#running-schema-validation-tests" rel="noopener noreferrer"&gt;to run&lt;/a&gt; the schema validation tests is to point the tool executable to a single manifest, directory or pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6dd2b38kr7o57z9fxv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6dd2b38kr7o57z9fxv9.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;kubeval&lt;/li&gt;
&lt;li&gt;kubeconform&lt;/li&gt;
&lt;li&gt;kubectl dry-run in ‘client’ mode&lt;/li&gt;
&lt;li&gt;kubectl dry-run in ‘server’ mode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we’ve covered the tools that are available for Kubernetes schema validation, let’s compare some core abilities (misconfiguration coverage, speed, support for different versions, CRD support, and docs).&lt;/p&gt;

&lt;h3&gt;
  
  
  Misconfigurations coverage&lt;sup id="fnref1"&gt;1&lt;/sup&gt;
&lt;/h3&gt;

&lt;p&gt;I donned my QA hat and generated some (basic) Kubernetes manifest files with &lt;a href="https://github.com/datreeio/kubernetes-schema-validation/tree/main/misconfigs" rel="noopener noreferrer"&gt;intended misconfigurations&lt;/a&gt;, and then ran them against all four tools&lt;sup id="fnref2"&gt;2&lt;/sup&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Misconfig/Tool&lt;/th&gt;
&lt;th&gt;kubeval / kubeconform&lt;/th&gt;
&lt;th&gt;kubectl dry-run in ‘client’ mode&lt;/th&gt;
&lt;th&gt;kubectl dry-run in ‘server’ mode&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#api-deprecationyaml" rel="noopener noreferrer"&gt;API deprecation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#invalid-kind-valueyaml" rel="noopener noreferrer"&gt;Invalid kind value&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;❌ Didn't catch&lt;/td&gt;
&lt;td&gt;🚧 Caught&lt;sup id="fnref3"&gt;3&lt;/sup&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#invalid-label-valueyaml" rel="noopener noreferrer"&gt;Invalid label value&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;❌ Didn't catch&lt;/td&gt;
&lt;td&gt;❌ Didn't catch&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#invalid-protocol-typeyaml" rel="noopener noreferrer"&gt;Invalid protocol type&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;❌ Didn't catch&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#invalid-spec-keyyaml" rel="noopener noreferrer"&gt;Invalid spec key&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#missing-imageyaml" rel="noopener noreferrer"&gt;Missing image&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;❌ Didn't catch&lt;/td&gt;
&lt;td&gt;❌ Didn't catch&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/datreeio/kubernetes-schema-validation#wrong-k8s-indentationyaml" rel="noopener noreferrer"&gt;Wrong K8s indentation&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;td&gt;✅ Caught&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Conclusion: Running kubectl dry-run in ‘server’ mode caught all the misconfigurations, while kubeval/kubeconform missed two of them. It’s also interesting to see that kubectl dry-run in ‘client’ mode is almost useless: it misses some obvious misconfigurations, and it still requires a connection to a running Kubernetes environment.&lt;/p&gt;
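&lt;p&gt;For reference, here is a minimal sketch of both modes in action. The Pod manifest below is a stand-in written for this example, and since server mode needs a reachable cluster, failures are tolerated:&lt;/p&gt;

```shell
# A minimal sketch (not from the article): write a stand-in manifest, then run
# both dry-run modes. Server mode needs a reachable cluster, so failures are tolerated.
printf 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: demo\nspec:\n  containers:\n  - name: app\n    image: nginx\n' > /tmp/demo-pod.yaml
if command -v kubectl >/dev/null 2>/dev/null; then
  # client mode: validation happens on your machine
  kubectl apply --dry-run=client -f /tmp/demo-pod.yaml || true
  # server mode: the manifest is submitted to the API server without persisting it
  kubectl apply --dry-run=server -f /tmp/demo-pod.yaml || true
else
  echo "kubectl not installed; skipping"
fi
```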

&lt;h3&gt;
  
  
  Benchmark speed test
&lt;/h3&gt;

&lt;p&gt;I used &lt;a href="https://github.com/sharkdp/hyperfine" rel="noopener noreferrer"&gt;hyperfine&lt;/a&gt; to benchmark the execution time of each tool&lt;sup id="fnref4"&gt;4&lt;/sup&gt;. First I ran it against &lt;a href="https://github.com/datreeio/kubernetes-schema-validation/tree/main/misconfigs" rel="noopener noreferrer"&gt;(1)&lt;/a&gt; all the files with misconfigurations (seven files in total), and then I ran it against &lt;a href="https://github.com/datreeio/kubernetes-schema-validation/tree/main/benchmark" rel="noopener noreferrer"&gt;(2)&lt;/a&gt; 100 Kubernetes files (all the files contain the same config).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(1)&lt;/strong&gt; Results for running the tools against seven files with different Kubernetes schema misconfigurations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazbwpj5ppcf92pku1qiw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazbwpj5ppcf92pku1qiw.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(2)&lt;/strong&gt; Results for running the tools against 100 files with valid Kubernetes schemas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid0o1tf7t5ytg6vyqwdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid0o1tf7t5ytg6vyqwdc.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion: We can see that &lt;code&gt;kubeconform&lt;/code&gt; (#1), &lt;code&gt;kubeval&lt;/code&gt; (#2) and &lt;code&gt;kubectl --dry-run=client&lt;/code&gt; (#3) provide fast results on both tests, while &lt;code&gt;kubectl --dry-run=server&lt;/code&gt; (#4) is slower, especially when it needs to evaluate 100 files. Still, 60 seconds to generate a result is a good outcome in my opinion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes versions support
&lt;/h3&gt;

&lt;p&gt;Both kubeval and kubeconform accept the Kubernetes schema version as a flag. Although the tools are similar (as mentioned, kubeconform is based on kubeval), one of the key differences between them is that each relies on its own set of pre-generated JSON schemas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubeval&lt;/strong&gt; - &lt;a href="https://github.com/instrumenta/kubernetes-json-schema" rel="noopener noreferrer"&gt;instrumenta/kubernetes-json-schema&lt;/a&gt; &lt;em&gt;(last commit: 133f848 on April 29, 2020)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubeconform&lt;/strong&gt; - &lt;a href="https://github.com/yannh/kubernetes-json-schema" rel="noopener noreferrer"&gt;yannh/kubernetes-json-schema&lt;/a&gt; &lt;em&gt;(last commit: a660f03 on May 15, 2021)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As of today (May 2021), kubeval only supports Kubernetes schema versions up to 1.18.1, while kubeconform supports the latest Kubernetes schema available today — 1.21.0. With kubectl, it’s a little trickier: I don’t know which version of kubectl introduced the dry-run flag, but it worked when I tried it with Kubernetes version 1.16.0, so I know it’s available at least in Kubernetes versions 1.16.0-1.18.0.&lt;/p&gt;

&lt;p&gt;Support for a variety of Kubernetes schema versions is especially important if you want to migrate to a new Kubernetes version. With kubeval and kubeconform you can set the target version and start evaluating which configurations must be changed to support the cluster upgrade.&lt;/p&gt;
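&lt;p&gt;As a sketch, this is how you could pin the target version with kubeconform before an upgrade. The Service manifest is a stand-in created for this example, the flags follow the kubeconform README, and the check is skipped when the tool isn’t installed:&lt;/p&gt;

```shell
# Sketch: validate a stand-in manifest against the schema version you plan to
# upgrade to (flags per the kubeconform README; skipped if kubeconform is absent).
printf 'apiVersion: v1\nkind: Service\nmetadata:\n  name: demo\nspec:\n  ports:\n  - port: 80\n' > /tmp/demo-svc.yaml
if command -v kubeconform >/dev/null 2>/dev/null; then
  kubeconform -kubernetes-version 1.21.0 -summary /tmp/demo-svc.yaml || true
else
  echo "kubeconform not installed; skipping"
fi
```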

&lt;p&gt;Conclusion: The fact that kubeconform has all the schemas for all the different Kubernetes versions available — and also doesn’t require minikube setup (as kubectl does) — makes it a superior tool when comparing these capabilities to its alternatives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other things to consider
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Custom Resource Definition (CRD) support&lt;/strong&gt;&lt;br&gt;
Both kubectl dry-run and kubeconform support &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;custom resources (CRDs)&lt;/a&gt;, while kubeval does not. According to the kubeval docs, you can pass a flag that tells kubeval to ignore missing schemas, so it won’t fail when testing a batch of manifests in which only some are custom resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt; &lt;br&gt;
Kubeval is a more popular project than kubeconform, and therefore its community and &lt;a href="https://kubeval.instrumenta.dev/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; are more extensive. Kubeconform doesn't have official docs, but it does have a well-written &lt;a href="https://github.com/yannh/kubeconform/blob/master/Readme.md" rel="noopener noreferrer"&gt;README&lt;/a&gt; file that explains its capabilities pretty well. The interesting part is that although Kubernetes-native tools like kubectl are usually well-documented, it was really hard to find the information needed to understand how the &lt;code&gt;dry-run&lt;/code&gt; flag actually works and what its limitations are.&lt;/p&gt;

&lt;p&gt;Conclusion: Although it’s not as famous as kubeval, the CRD support and good-enough documentation make kubeconform the winner in my opinion.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item/Tool&lt;/th&gt;
&lt;th&gt;kubeval&lt;/th&gt;
&lt;th&gt;kubeconform&lt;/th&gt;
&lt;th&gt;dry-run client&lt;/th&gt;
&lt;th&gt;dry-run server&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Misconfigurations coverage&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Benchmark speed test&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kubernetes versions support&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CRD support&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Documentation&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+/-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Now that you know the pros and cons associated with each tool, here are some best practices for how to best leverage them within your Kubernetes production-scale development flow. &lt;/p&gt;

&lt;h2&gt;
  
  
  Strategies for validating Kubernetes schema using these tools
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;⬅️ Shift-left: When possible, the best setup is to run &lt;code&gt;kubectl --dry-run=server&lt;/code&gt; on every code change, but you probably can’t, because you can’t allow every developer or CI machine in your organization to have a connection to your cluster. So the second-best option is to run kubeconform.&lt;/li&gt;
&lt;li&gt;🚔 Because kubeconform doesn’t cover all common misconfigurations, it’s recommended to run it with a policy enforcement tool on every code change to fill the coverage gap.&lt;/li&gt;
&lt;li&gt;💸 Buy vs. build: If you enjoy the &lt;a href="https://jrott.com/posts/why-buy/" rel="noopener noreferrer"&gt;engineering overhead&lt;/a&gt;, then kubeconform + &lt;a href="https://www.conftest.dev/" rel="noopener noreferrer"&gt;conftest&lt;/a&gt; is a great combination of tools to get good coverage. Alternatively, there are tools that can provide you with an out-of-the-box experience to help you save time and resources, such as &lt;a href="https://hub.datree.io/schema-validation/?utm_source=dev.to&amp;amp;utm_medium=schema-validation"&gt;Datree&lt;/a&gt;&lt;sup id="fnref5"&gt;5&lt;/sup&gt; (whose schema validation is powered by kubeconform).&lt;/li&gt;
&lt;li&gt;🚀 During the CD step, it shouldn’t be a problem to have a connection with your cluster, so you should always run &lt;code&gt;kubectl --dry-run=server&lt;/code&gt; before deploying your new code changes. &lt;/li&gt;
&lt;li&gt;👯 Another option for using kubectl dry-run in server mode, without having a connection to your Kubernetes environment, is to run minikube + &lt;code&gt;kubectl --dry-run=server&lt;/code&gt;. The downside of this hack is that it’s also required to set up the minikube cluster like prod (same volumes, namespace, etc.) or you’ll encounter errors when trying to validate your Kubernetes manifests.&lt;/li&gt;
&lt;/ul&gt;
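&lt;p&gt;The strategy above could look roughly like this illustrative CI step (paths, the sample manifest, and the &lt;code&gt;MANIFESTS&lt;/code&gt; variable are all made up for this sketch; each tool is skipped or tolerated when unavailable):&lt;/p&gt;

```shell
# Illustrative CI step: schema-validate every change with kubeconform, and keep
# the server-side dry-run for the CD stage, where a cluster connection exists.
MANIFESTS=/tmp/ci-manifests
mkdir -p "$MANIFESTS"
printf 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: ci-demo\nspec:\n  containers:\n  - name: app\n    image: nginx\n' > "$MANIFESTS/pod.yaml"
if command -v kubeconform >/dev/null 2>/dev/null; then
  kubeconform -strict "$MANIFESTS/pod.yaml" || true   # CI: no cluster needed
fi
if command -v kubectl >/dev/null 2>/dev/null; then
  kubectl apply --dry-run=server -f "$MANIFESTS/pod.yaml" || echo "server dry-run needs a cluster"
fi
```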

&lt;h4&gt;
  
  
  GRATITUDE
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Thank you to &lt;a href="https://github.com/yannh" rel="noopener noreferrer"&gt;Yann Hamon&lt;/a&gt; for creating kubeconform - it’s awesome!&lt;/em&gt;&lt;br&gt;
&lt;em&gt;This article wouldn’t be possible without you. Thank you for all of your guidance.&lt;/em&gt;&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;All the schema validation tests were performed against Kubernetes version 1.18.0 ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Because kubeconform is based on kubeval, both tools produced the same results when run against the files with the misconfigurations. kubectl is a single tool, but each mode (client or server) produces a different result, as you can see from the table ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Server mode didn’t mark the file as valid (exit code 1) but the error message is wrong: &lt;code&gt;Kind=pod doesn't support dry-run&lt;/code&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;All benchmark tests were performed on my MacBook Pro with a 2.3 GHz Quad-Core Intel Core i7 processor ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Disclaimer - self-promotion here :) ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>gitops</category>
    </item>
    <item>
      <title>10 insanely useful Git commands you wish existed – and their alternatives</title>
      <dc:creator>Eyar Zilberman</dc:creator>
      <pubDate>Tue, 23 Apr 2019 13:43:50 +0000</pubDate>
      <link>https://dev.to/datreeio/10-insanely-useful-git-commands-you-wish-existed-and-their-alternatives-8e6</link>
      <guid>https://dev.to/datreeio/10-insanely-useful-git-commands-you-wish-existed-and-their-alternatives-8e6</guid>
      <description>&lt;h2&gt;
  
  
  There’s a git command for that
&lt;/h2&gt;

&lt;p&gt;Git commands aren’t always intuitive. If they were, we would have these 10 commands at our disposal. They would be super useful for accomplishing common tasks like creating or renaming a git branch, removing files, and undoing changes.&lt;/p&gt;

&lt;p&gt;For each git command in our wishlist, we’ll show you the commands that actually exist and you can use to accomplish the same tasks. If you’re still learning Git, this list reads like a tutorial and is worth keeping as a cheatsheet.&lt;/p&gt;




&lt;h2&gt;
  
  
  # 9 – git create branch: create a new branch with git checkout
&lt;/h2&gt;

&lt;p&gt;The fastest way to create a new branch is to do it straight from the git terminal. This way you don’t have to go through the GitHub UI, for example, if you use GitHub for version control.&lt;/p&gt;

&lt;p&gt;This command actually exists in git, only under a different name – &lt;code&gt;$ git checkout&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to create a branch with git checkout:
&lt;/h3&gt;

&lt;p&gt;One-line command: &lt;code&gt;$ git checkout -b &amp;lt;branch-name&amp;gt; master&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git checkout -b feature-branch master
-&amp;gt; commit-test git:(feature-branch) $
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: Just like with &lt;a href="https://datree.io/blog/git-commit-message-conventions-for-readable-git-log/"&gt;commit messages&lt;/a&gt;, having a naming convention for git &lt;a href="https://stackoverflow.com/questions/273695/what-are-some-examples-of-commonly-used-practices-for-naming-git-branches"&gt;branches&lt;/a&gt; is a good best practice to adopt.&lt;/p&gt;
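&lt;p&gt;You can try the one-liner end-to-end in a throwaway repository; the paths and branch names below are illustrative:&lt;/p&gt;

```shell
# Throwaway repository; paths and branch names are illustrative.
set -e
REPO=/tmp/git-branch-demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "init"
git branch -M master                    # normalize the default branch name
git checkout -b feature-branch master   # create the branch and switch to it
git symbolic-ref --short HEAD           # prints: feature-branch
```

&lt;p&gt;On Git 2.23 and newer, &lt;code&gt;$ git switch -c feature-branch&lt;/code&gt; does the same thing with a clearer name.&lt;/p&gt;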




&lt;h2&gt;
  
  
  # 8 – git force pull: overwrite local with git pull
&lt;/h2&gt;

&lt;p&gt;You find out you’ve made changes that seemingly conflict with the upstream changes. At this point, you decide to overwrite your changes instead of keeping them, so you do a &lt;code&gt;$ git pull&lt;/code&gt; and you get this error message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git pull
Updating db40e41..2958dc6
error: Your local changes to the following files would be overwritten by merge:
README.md
hint: Please, commit your changes before merging.
fatal: Exiting because of unfinished merge.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How to overwrite local changes with git pull:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Stash local changes: &lt;code&gt;$ git stash&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Pull changes from remote: &lt;code&gt;$ git pull&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git stash
Updating db40e41..2958dc6
Saved working directory and index state WIP on master: d8fde76 fix(API): remove ‘test’ end-point
-&amp;gt; commit-test git:(master) $ git pull
Auto-merging README.md
Merge made by the ‘recurive’ strategy.
README.md     | 1 +
ENDPOINT.js    | 3 ++–
2 files changes, 3 insertions(+), 1 deletions(-)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: If you want to retrieve your changes just do: &lt;code&gt;$ git stash apply&lt;/code&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  # 7 – git remove untracked files: delete untracked files from working tree
&lt;/h2&gt;

&lt;p&gt;When you have unnecessary files and directories in your local copy of a repository and you want to delete those files, as opposed to just ignoring them (with .gitignore), you can use git clean to remove all the files which are not tracked by git.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to remove untracked files and dirs:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Start with a dry-run to see what will be deleted: &lt;code&gt;$ git clean -n -d&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After you are sure, run the git clean command with “-f” flag: &lt;code&gt;$ git clean -f -d&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git clean -n -d
Would remove dontTrackDir/untracked_file1.py
Would remove untracked_file2.py
-&amp;gt; commit-test git:(master) $ git clean -f -d
Removing dontTrackDir/untracked_file1.py
Removing untracked_file2.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: Instead of untracking files, a good practice is to prevent those files from being tracked in the first place by &lt;a href="https://docs.datree.io/docs/include-mandatory-files-gitignore"&gt;using .gitignore&lt;/a&gt; file.&lt;/p&gt;
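&lt;p&gt;The dry-run-then-clean flow can be reproduced in a throwaway repository (file and directory names below are illustrative):&lt;/p&gt;

```shell
# Throwaway repository demonstrating the dry-run-then-clean flow.
set -e
REPO=/tmp/git-clean-demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "init"
mkdir dontTrackDir
touch dontTrackDir/untracked_file1.py untracked_file2.py
git clean -n -d          # dry run: only lists what would be removed
git clean -f -d          # actually removes untracked files and directories
git status --porcelain   # prints nothing: the working tree is clean again
```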




&lt;h2&gt;
  
  
  # 6 – git unstage: unstage file(s) from index
&lt;/h2&gt;

&lt;p&gt;When you’re adding files (&lt;code&gt;$ git add&lt;/code&gt;) to the working tree, you are adding them to the staging area, meaning you are staging them. If you want Git to stop tracking specific files in the working tree, you need to remove them from your staged files (.git/index).&lt;/p&gt;

&lt;h3&gt;
  
  
  How to unstage file(s) from index:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Keep the file but remove it from the index: &lt;code&gt;$ git rm --cached &amp;lt;file-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git rm –cached unstageMe.js
rm unstageMe.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To leave the entire working tree untouched, unstage all files (clear your index): &lt;code&gt;$ git reset&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git reset
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: you can also untrack files that were already added to the git repository &lt;a href="http://www.codeblocq.com/2016/01/Untrack-files-already-added-to-git-repository-based-on-gitignore/"&gt;based on .gitignore&lt;/a&gt;.&lt;/p&gt;
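&lt;p&gt;Here is the same unstage flow in a throwaway repository (file names are illustrative):&lt;/p&gt;

```shell
# Throwaway repository: stage a new file, then remove it from the index only.
set -e
REPO=/tmp/git-unstage-demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "init"
echo "temp" > unstageMe.js
git add unstageMe.js
git rm --cached -q unstageMe.js   # leave the file on disk, drop it from the index
git status --porcelain            # prints: ?? unstageMe.js
```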




&lt;h2&gt;
  
  
  # 5 – git undo merge: abort (cancel) a merge after it happened
&lt;/h2&gt;

&lt;p&gt;Sometimes you get in a situation (we’ve all been there) where you merged branches and realize you need to undo the merge because you don’t want to release the code you just merged.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to abort (cancel) a merge and maintain all committed history:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Checkout to the master branch: &lt;code&gt;$ git checkout master&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run git log and get the id of the merge commit: &lt;code&gt;$ git log --oneline&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Revert merge by commit id: &lt;code&gt;$ git revert -m 1 &amp;lt;merge-commit-id&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Commit the revert and push changes to the remote repo. You can start putting on your poker face and pretend “nothing’s happened”.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(master) $ git log –oneline
812d761 Merge pull request #524 from datreeio/DAT-1332-resolve-installation-id
b06dee0 feat: added installation event support
8471b2b fix: get organization details from repository object

-&amp;gt; commit-test git:(master) $ git revert -m 1 812d761
Revert “Merge pull request #524 from datreeio/DAT-1332-resolve-installation-id”
[master 75b85db] Revert “Merge pull request #524 from datreeio/DAT-1332-resolve-installation-id”
1 file changed, 1 deletion(-)
-&amp;gt; commit-test git:(master) $ git commit -m “revert merge #524”
-&amp;gt; commit-test git:(master) $ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: Instead of reverting merge, working with pull requests and setting up or improving your &lt;a href="https://phauer.com/2018/code-review-guidelines/"&gt;code review&lt;/a&gt; process can lower the possibility of a faulty merge.&lt;/p&gt;
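&lt;p&gt;The whole revert-a-merge flow can be reproduced in a throwaway repository; &lt;code&gt;master&lt;/code&gt;, &lt;code&gt;feature&lt;/code&gt; and the file contents are illustrative:&lt;/p&gt;

```shell
# Throwaway repository reproducing the revert-a-merge flow.
set -e
REPO=/tmp/git-revert-demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
echo "base" > app.txt
git add app.txt
git commit -q -m "init"
git branch -M master
git checkout -q -b feature
echo "feature change" > app.txt
git commit -qam "feat: change app"
git checkout -q master
git merge -q --no-ff -m "merge feature" feature
git revert -m 1 --no-edit HEAD   # undo the merge while keeping all history
cat app.txt                      # prints: base
```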




&lt;h2&gt;
  
  
  # 4 – git remove file: remove file(s) from a commit on remote
&lt;/h2&gt;

&lt;p&gt;You wish to delete a file (or files) on remote, maybe because it is deprecated or because the file was not supposed to be there in the first place. So, you wonder, what is the protocol to delete files from a remote git repository?&lt;/p&gt;

&lt;h3&gt;
  
  
  How to remove file(s) from commit:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Remove your file(s): &lt;code&gt;$ git rm &amp;lt;file-A&amp;gt; &amp;lt;file-B&amp;gt; &amp;lt;file-C&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Commit your changes: &lt;code&gt;$ git commit -m "removing files"&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Push your changes to git: &lt;code&gt;$ git push&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(delete-files) $ git rm deleteMe.js
rm ‘deleteMe.js’
-&amp;gt; commit-test git:(delete-files) $ git commit -m “removing files”
[delete-files 75e998e] removing files
1 file changed, 2 deletions(-)
delete mode 100644 deleteMe.js
-&amp;gt; commit-test git:(delete-files) $ git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: When a file is removed from Git, it doesn’t mean it is removed from history. The file will keep “living” in the repository history until it is &lt;a href="https://help.github.com/en/articles/removing-sensitive-data-from-a-repository"&gt;completely deleted&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  # 3 – git uncommit: undo the last commit
&lt;/h2&gt;

&lt;p&gt;You made a commit but now you regret it. Maybe you &lt;a href="https://datree.io/blog/secrets-management-git-version-control/"&gt;committed secrets&lt;/a&gt; by accident – not a good idea – or maybe you want to add more tests to your code changes. These are all legit reasons to undo your last commit.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to uncommit (undo) the last commit:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To keep the changes from the commit you want to undo: &lt;code&gt;$ git reset --soft HEAD^&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;To destroy the changes from the commit you want to undo: &lt;code&gt;$ git reset --hard HEAD^&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(undo-commit) $ git commit -m “I will regret this commit”
[undo-commit a7d8ed4] I will regret this commit
1 file changed, 1 insertion(+)
-&amp;gt; commit-test git:(undo-commit) $ git reset --soft HEAD^
-&amp;gt; commit-test git:(undo-commit) $ git status
On branch undo-commit
Changes to be committed:
(use “git reset HEAD &amp;lt;file&amp;gt;…” to unstage)

    modified: README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: Git &lt;a href="https://gist.github.com/eyarz/64770d343b9cab442b257869f497af9e"&gt;pre-commit hook&lt;/a&gt; is a built-in feature that lets you define scripts that will run automatically before each commit. Use it to reduce the need to cancel commits.&lt;/p&gt;
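&lt;p&gt;Here is the soft-reset variant in a throwaway repository (file names and messages are illustrative):&lt;/p&gt;

```shell
# Throwaway repository: undo the last commit but keep its changes staged.
set -e
REPO=/tmp/git-uncommit-demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
echo "v1" > README.md
git add README.md
git commit -q -m "init"
echo "v2" > README.md
git commit -qam "I will regret this commit"
git reset --soft HEAD^      # drop the commit, keep its changes in the index
git rev-list --count HEAD   # prints: 1
git status --porcelain      # prints: M  README.md
```

&lt;p&gt;If you used &lt;code&gt;--hard&lt;/code&gt; by mistake, &lt;code&gt;git reflog&lt;/code&gt; can usually still recover the dropped commit.&lt;/p&gt;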




&lt;h2&gt;
  
  
  # 2 – git diff between branches
&lt;/h2&gt;

&lt;p&gt;When you are working with multiple git branches, it’s important to be able to compare and contrast the differences between two different branches on the same repository. You can do this using the &lt;code&gt;$ git diff&lt;/code&gt; command.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to get the diff between two branches:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Find the diff between the tips of the two branches: &lt;code&gt;$ git diff branch_1..branch_2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Produce the diff between two branches from common ancestor commit: &lt;code&gt;$ git diff branch_1...branch_2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Compare a specific file between branches: &lt;code&gt;$ git diff branch1:file branch2:file&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(diff-me) $ git diff master..diff-me
diff –git a/README.md b/README.md
index b74512d..da1e423 100644
— a/README.md
+++ b/README.md
@@ -1,2 +1,3 @@
# commit-test
-Text on “master” branch
+Text on “diff-me” branch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: &lt;a href="https://github.com/so-fancy/diff-so-fancy"&gt;diff-so-fancy&lt;/a&gt; is a great open source solution to make your diffs human readable.&lt;/p&gt;
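&lt;p&gt;The two-dot comparison can be reproduced in a throwaway repository (branch names and file contents mirror the transcript above and are illustrative):&lt;/p&gt;

```shell
# Throwaway repository: diff the tips of two branches.
set -e
REPO=/tmp/git-diff-demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
echo "Text on master branch" > README.md
git add README.md
git commit -q -m "init"
git branch -M master
git checkout -q -b diff-me
echo "Text on diff-me branch" > README.md
git commit -qam "docs: update README"
git diff master..diff-me -- README.md   # shows the -/+ lines between the branch tips
```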




&lt;h2&gt;
  
  
  # 1 – git delete tag: remove a tag from branch
&lt;/h2&gt;

&lt;p&gt;In the case of a “buggy” release, you probably don’t want someone to accidentally use the release linked to this tag. The best solution is to delete the tag and remove the connection between the release and its corresponding tag.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to delete tag by removing it from branch:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;If you have a remote tag to delete, and your remote is origin, then simply: &lt;code&gt;$ git push origin :refs/tags/&amp;lt;tag-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;If you also need to delete the tag locally: &lt;code&gt;$ git tag -d &amp;lt;tag-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(delete-tag) $ git push origin :refs/tags/v1.0.0
To github.com:datreeio/commit-test.git
– [deleted]         v1.0.0
-&amp;gt; commit-test git:(delete-tag) $ git tag -d v1.0.0
Deleted tag ‘v1.0.0’ (was af4d0ea)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: Not sure when or why to use tags? &lt;a href="https://medium.com/@kevinkreuzer/the-way-to-fully-automated-releases-in-open-source-projects-44c015f38fd6"&gt;Read here&lt;/a&gt; to learn more (TL;DR: automatic releasing)&lt;/p&gt;
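&lt;p&gt;Both deletion steps can be tried safely with a local bare repository standing in for origin (all paths and the tag name are illustrative):&lt;/p&gt;

```shell
# Throwaway setup: a local bare repository stands in for the origin remote.
set -e
BASE=/tmp/git-tag-demo
rm -rf "$BASE"
mkdir -p "$BASE"
git init -q --bare "$BASE/origin.git"
git clone -q "$BASE/origin.git" "$BASE/work"
cd "$BASE/work"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "init"
git tag v1.0.0
git push -q origin HEAD v1.0.0         # publish the branch and the tag
git push -q origin :refs/tags/v1.0.0   # delete the tag on the remote
git tag -d v1.0.0                      # delete the tag locally
git ls-remote --tags origin            # prints nothing: the tag is gone
```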




&lt;h2&gt;
  
  
  # 0 – git rename branch: change branch name
&lt;/h2&gt;

&lt;p&gt;As I mentioned, having a branch naming convention is a good practice that should be adopted as part of your coding standards, and it is especially useful for automating git workflows. But what do you do when you find out your branch name is not aligned with the convention, after already pushing code to the branch? Don’t worry, you can still rename your branch.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to rename branch name after it was created:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Checkout to the branch you need to rename: &lt;code&gt;$ git checkout &amp;lt;old-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Rename branch name locally: &lt;code&gt;$ git branch -m &amp;lt;new-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Delete the old branch from remote and push the new one: &lt;code&gt;$ git push origin :&amp;lt;old-name&amp;gt; &amp;lt;new-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Reset the upstream branch for the new branch name: &lt;code&gt;$ git push origin -u &amp;lt;new-name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-&amp;gt; commit-test git:(old-name) $ git branch -m new-name
-&amp;gt; commit-test git:(new-name) $ git push origin :old-name new-name
Total 0 (delta 0), reused 0 (delta 0)
To github.com:datreeio/commit-test.git
–  [deleted]             old-name
* [new branch]      new-name -&amp;gt; new-name
-&amp;gt; commit-test git:(new-name) $ git push origin -u new-name
Branch new-name set up to track remote branch new-name from origin.
Everything up-to-date
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Git tip&lt;/strong&gt;: Want to make sure all branch names will always follow your convention? Set a Git-enforced &lt;a href="https://docs.datree.io/docs/branch-name"&gt;branch naming policy&lt;/a&gt;.&lt;/p&gt;
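&lt;p&gt;The four steps can be rehearsed against a local bare repository standing in for origin (paths and branch names are illustrative; the rename here targets a non-default branch):&lt;/p&gt;

```shell
# Throwaway setup: a local bare repository stands in for the origin remote.
set -e
BASE=/tmp/git-rename-demo
rm -rf "$BASE"
mkdir -p "$BASE"
git init -q --bare "$BASE/origin.git"
git clone -q "$BASE/origin.git" "$BASE/work"
cd "$BASE/work"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git commit -q --allow-empty -m "init"
git push -q origin HEAD                 # publish the default branch
git checkout -q -b old-name             # 1. checkout the branch to rename
git push -q -u origin old-name
git branch -m new-name                  # 2. rename it locally
git push -q origin :old-name new-name   # 3. delete the old name on remote, push the new one
git push -q -u origin new-name          # 4. reset the upstream branch
git branch --show-current               # prints: new-name
```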




&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;Found some useful commands? You can also set &lt;a href="https://github.com/GitAlias/gitalias"&gt;alias commands&lt;/a&gt; for them! &lt;br&gt;
Relevant aliases for this blog post are provided by &lt;a class="comment-mentioned-user" href="https://dev.to/mfrata"&gt;@mfrata&lt;/a&gt; in &lt;a href="https://dev.to/mfrata/comment/aab8"&gt;his comment&lt;/a&gt; - you can thank him :)&lt;/p&gt;




&lt;p&gt;Originally posted on &lt;a href="https://datree.io/resources/git-commands"&gt;https://datree.io/resources/git-commands&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>top10</category>
    </item>
    <item>
      <title>Top 10 GitHub Best Practices</title>
      <dc:creator>Eyar Zilberman</dc:creator>
      <pubDate>Mon, 15 Apr 2019 13:28:03 +0000</pubDate>
      <link>https://dev.to/datreeio/top-10-github-best-practices-3kl2</link>
      <guid>https://dev.to/datreeio/top-10-github-best-practices-3kl2</guid>
<description>&lt;p&gt;After scanning thousands of repositories and interviewing hundreds of GitHub users, I created a list of common best practices that are strongly recommended for every modern software development organization which is using GitHub to store its code:&lt;/p&gt;

&lt;h2&gt;
  
  
  9. 🚧 Protect the main branches from direct commits
&lt;/h2&gt;

&lt;p&gt;Anything in the master branch should always be deployable, that’s why you should never commit to the default branches directly and why &lt;a href="https://nvie.com/posts/a-successful-git-branching-model/" rel="noopener noreferrer"&gt;Gitflow workflow&lt;/a&gt; has become the standard. Using &lt;a href="https://help.github.com/articles/configuring-protected-branches/" rel="noopener noreferrer"&gt;branch protection&lt;/a&gt; can help you prevent direct commits and of course, everything should be managed via pull requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-branch-protection.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-branch-protection.gif" alt="branch protection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  8. 👻 Avoid unrecognized committers
&lt;/h2&gt;

&lt;p&gt;Maybe you are working on a new environment, or you didn’t notice that your &lt;a href="https://help.github.com/articles/why-are-my-commits-linked-to-the-wrong-user/" rel="noopener noreferrer"&gt;Git configuration is incorrect&lt;/a&gt;, which causes you to commit code with the wrong email address. Now the commit is not associated with the right user, making it nearly impossible to trace back who wrote what.&lt;/p&gt;

&lt;p&gt;Make sure that your &lt;a href="https://help.github.com/articles/setting-your-commit-email-address-in-git/" rel="noopener noreferrer"&gt;Git client is configured&lt;/a&gt; with the correct email address and linked to your GitHub user. Check your pull requests during code review for unrecognized commits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Funrecognized-commits.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Funrecognized-commits.jpg" alt="unrecognized committer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. 🎩 Define CODEOWNERS for each repository
&lt;/h2&gt;

&lt;p&gt;Using the &lt;a href="https://help.github.com/articles/about-codeowners/" rel="noopener noreferrer"&gt;CODEOWNERS&lt;/a&gt; feature allows you to define which teams and people are automatically selected as reviewers for the repository. This ability automatically requests a review from the repository owners. Nowadays organizations have dozens if not hundreds of repositories and CODEOWNERS gives the option to define who the repo maintainers are across your organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-code-owners-datree.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-code-owners-datree.png" alt="codeowners"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. 🙊 Separate secret credentials from source code
&lt;/h2&gt;

&lt;p&gt;When building a Cloud Native app, there are many secrets — account passwords, API keys, private tokens, and SSH keys — that we need to safeguard. Never commit any secrets into your code. Instead, use environment variables that are injected externally from a secure store.&lt;/p&gt;

&lt;p&gt;You can use tools like &lt;a href="https://www.vaultproject.io/" rel="noopener noreferrer"&gt;Vault&lt;/a&gt; and &lt;a href="https://aws.amazon.com/secrets-manager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt; to help with your secret management in production.&lt;/p&gt;

&lt;p&gt;There are lots of great tools to identify existing secrets in your code and prevent new ones. For example, &lt;a href="https://github.com/awslabs/git-secrets" rel="noopener noreferrer"&gt;git-secrets&lt;/a&gt; can help you identify passwords in your code. With &lt;a href="https://githooks.com/" rel="noopener noreferrer"&gt;Git hooks&lt;/a&gt; you can build a &lt;a href="https://github.com/git/git/blob/master/templates/hooks--pre-commit.sample" rel="noopener noreferrer"&gt;pre-commit hook&lt;/a&gt; that checks every commit for secrets before it enters history.&lt;/p&gt;
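<p>To illustrate the idea, here is a deliberately minimal pre-commit hook that greps staged changes for a couple of common credential patterns. The two patterns below are only examples; git-secrets ships far more thorough checks:</p>

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-commit sketch: abort the commit when
# the staged changes appear to contain credentials.
if git diff --cached -U0 | grep -qE 'AKIA[0-9A-Z]{16}|BEGIN RSA PRIVATE KEY'; then
  echo "Potential secret detected in staged changes; aborting commit."
  exit 1
fi
```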

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-secrets-in-code-datree.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-secrets-in-code-datree.png" alt="secrets in code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. ⛔ Avoid committing dependencies into your project
&lt;/h2&gt;

&lt;p&gt;Pushing dependencies into your remote origin inflates the repository size. Remove any project dependencies included in your repositories and let your package manager download them in each build. If you are worried about “dependency availability”, consider using a binary repository manager solution like &lt;a href="https://jfrog.com/" rel="noopener noreferrer"&gt;JFrog&lt;/a&gt; or &lt;a href="https://www.sonatype.com/nexus-repository-sonatype" rel="noopener noreferrer"&gt;Nexus Repository&lt;/a&gt;. Also, check out &lt;a href="https://github.com/github/git-sizer" rel="noopener noreferrer"&gt;Git-Sizer&lt;/a&gt; by GitHub to keep an eye on repository size.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. 🔧 Separate configuration files from source code
&lt;/h2&gt;

&lt;p&gt;We strongly recommend against committing your local config files to version control. Usually, those are private configuration files which you don’t want to push to remote because they are holding secrets, personal preferences, history or general information which should stay only in your local environment.&lt;/p&gt;

&lt;h2&gt;
  
  
3. 📤 Create a meaningful .gitignore file for your projects
&lt;/h2&gt;

&lt;p&gt;A .gitignore file is a must in each repository to ignore predefined files and directories. It helps you keep secret keys, dependencies, build artifacts and other local noise out of your code. You can choose a relevant template from &lt;a href="https://www.gitignore.io/" rel="noopener noreferrer"&gt;Gitignore.io&lt;/a&gt; to get started.&lt;/p&gt;
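<p>For example, a minimal .gitignore for a Node.js project might start like this:</p>

```text
# Dependencies are restored by the package manager on each build
node_modules/

# Local configuration and secrets
.env

# Build output and local noise
dist/
*.log
.DS_Store
```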

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgitignore.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgitignore.gif" alt="gitignore"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. 💀 Archive dead repositories
&lt;/h2&gt;

&lt;p&gt;Over time, for various reasons, we find ourselves with unmaintained repositories. Maybe you opened a new repository for an ad hoc use case (or to POC a new tech), or you have some repositories with old and irrelevant code. The problem is the same – those repositories are no longer actively developed after serving their purpose, so you don’t want to maintain them or have other people rely on them. The best practice is to archive those repositories, which makes them “read-only” for everyone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-archive-repo.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-archive-repo.gif" alt="archive repository"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. 🔒 Lock package version
&lt;/h2&gt;

&lt;p&gt;Your manifest file holds the version of every package your project depends on, so that installing your app dependencies produces consistent results without breaking your code. The best practice is to use a manifest lock file to avoid any discrepancies and confirm that you get the same package versions each time. If instead you leave your dependency versions imprecise, you cannot be sure which version will be installed on the next build, and your code may break.&lt;/p&gt;
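<p>For example, with npm this means committing &lt;em&gt;package-lock.json&lt;/em&gt;, installing with &lt;em&gt;npm ci&lt;/em&gt;, and preferring exact versions over floating ranges. The package and version below are just an illustration:</p>

```json
{
  "dependencies": {
    "koa": "2.7.0"
  }
}
```

<p>With a range like &lt;em&gt;"^2.7.0"&lt;/em&gt;, the next build may silently pick up a newer release.</p>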

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fkoa-latest-version.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fkoa-latest-version.gif" alt="lock version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  0. ♻️ Align packages versioning
&lt;/h2&gt;

&lt;p&gt;Even when different projects use the same package, inconsistent version distributions across them make it harder to reuse code and tests between those projects.&lt;/p&gt;

&lt;p&gt;If you have a package which is used in multiple projects, try at a minimum to use the same major version across the different repositories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-version-distribution-datree-catalog.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdatree.io%2Fwp-content%2Fuploads%2F2018%2F10%2Fgithub-version-distribution-datree-catalog.jpg" alt="align version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 What’s next?
&lt;/h2&gt;

&lt;p&gt;All that’s left for you to do is check off each of the aforementioned best practices, on each of your repositories, one by one.&lt;/p&gt;

&lt;p&gt;Or, save your sanity and connect with &lt;a href="https://github.com/marketplace/datree/?source=dev.to"&gt;Datree’s GitHub app&lt;/a&gt; (it's even free!) to scan your repositories and generate your free status report to assess if your repositories align with the listed best practices.&lt;/p&gt;

</description>
      <category>github</category>
      <category>bestpractices</category>
      <category>top10</category>
    </item>
    <item>
      <title>A guide to GitHub Actions using Node.js for Git workflow automation</title>
      <dc:creator>Roman Labunsky</dc:creator>
      <pubDate>Sun, 31 Mar 2019 18:09:26 +0000</pubDate>
      <link>https://dev.to/datreeio/a-guide-to-github-actions-using-node-js-for-git-workflow-automation-43bc</link>
      <guid>https://dev.to/datreeio/a-guide-to-github-actions-using-node-js-for-git-workflow-automation-43bc</guid>
      <description>&lt;h2&gt;
  
  
  What is Github Actions?
&lt;/h2&gt;

&lt;p&gt;GitHub Actions, a feature announced in October 2018 during GitHub Universe, generated immense hype under the apt positioning as the “swiss army knife” of git workflow automation.&lt;/p&gt;

&lt;p&gt;Github Actions allows developers to perform tasks automatically in a Github workflow, such as pushing commits to a repository, deploying a release to staging, running tests, removing feature flags, and so on, by way of a simple text file.&lt;/p&gt;

&lt;p&gt;&lt;a href="http://chewy.com/" rel="noopener noreferrer"&gt;Chewy.com&lt;/a&gt;, for example, demoed an action that checks if a &lt;a href="https://vimeo.com/295656803" rel="noopener noreferrer"&gt;Jira ticket number is included&lt;/a&gt; in every pull request name among other things, to ensure the code being deployed to production is compliant with their policies and best practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  “With GitHub Actions, you can automate your workflow from idea to production.”
&lt;/h3&gt;

&lt;p&gt;– &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub actions page&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Write Github Actions in Node.js
&lt;/h2&gt;

&lt;p&gt;I started tinkering with the feature as soon as I could get access to the private beta. I noticed that most Actions are written in shell script. GitHub itself promoted writing actions in &lt;a href="https://developer.github.com/actions/creating-github-actions/creating-a-new-action/#using-shell-scripts-to-create-actions" rel="noopener noreferrer"&gt;shell script for simple actions&lt;/a&gt;. While I understand the motivation (so you can quickly and easily start writing Actions), I feel that shell scripts are limited in terms of writing full-fledged software.&lt;/p&gt;

&lt;p&gt;Since I generally work in Node.js, I decided to ‘accept the challenge’ and write my first Action in Node. But, after struggling with the specifics of running Node.js actions and understanding the differences between a simple container and the GitHub execution environment, I decided to write this tutorial in the hope that it will help others who prefer to write an action in JavaScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Environment setup
&lt;/h2&gt;

&lt;p&gt;The basic setup is straightforward – start by following &lt;a href="https://developer.github.com/actions/creating-github-actions/creating-a-new-action/#creating-a-new-github-action" rel="noopener noreferrer"&gt;GitHub’s tutorial&lt;/a&gt; all the way to the &lt;em&gt;entrypoint.sh&lt;/em&gt; section. The main difference up to this point is in the chosen Docker image. I suggest using the &lt;a href="https://github.com/nodejs/docker-node/tree/90043cdde5057865b94fec447ce193fb46b69e18#nodealpine" rel="noopener noreferrer"&gt;alpine image&lt;/a&gt;, since it’s very lightweight compared to the regular &lt;a href="https://github.com/nodejs/docker-node/tree/90043cdde5057865b94fec447ce193fb46b69e18#nodealpine" rel="noopener noreferrer"&gt;node image&lt;/a&gt;. In any case, I suggest using an LTS variant, currently Node 10.&lt;/p&gt;

&lt;p&gt;A very important tool, one that helped reduce the development cycle from 5 minutes per iteration to mere seconds is &lt;a href="https://github.com/nektos/act" rel="noopener noreferrer"&gt;Act&lt;/a&gt;, a zero-config, easy to use tool to run actions locally. It doesn’t fully replicate the environment (for obvious reasons, it doesn’t provide a GitHub token, more on that later) but it’s close enough to speed up the development process and test your action locally.&lt;/p&gt;

&lt;p&gt;At the end of this step, you should have a Dockerfile that looks like this:&lt;br&gt;
&lt;/p&gt;
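<p>The original gist is not embedded in this feed, but a Dockerfile along those lines might look like the following sketch; the action name and label values are illustrative placeholders:</p>

```dockerfile
# Hypothetical Dockerfile for a Node.js GitHub Action, based on the
# lightweight Node 10 (LTS) alpine image.
FROM node:10-alpine

LABEL "com.github.actions.name"="my-node-action"
LABEL "com.github.actions.description"="Runs a Node.js script as a GitHub Action"

# Copy the action sources into the image and hand control to the
# entrypoint script, which installs dependencies and runs the script.
COPY . /action
WORKDIR /action
ENTRYPOINT ["/action/entrypoint.sh"]
```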


&lt;h2&gt;
  
  
  Node-specific setup
&lt;/h2&gt;

&lt;p&gt;This is the basic &lt;em&gt;entrypoint.sh&lt;/em&gt; file that will work for a Node.js action:&lt;br&gt;
&lt;/p&gt;
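<p>The embedded gist is missing here; based on the description in this post, a minimal &lt;em&gt;entrypoint.sh&lt;/em&gt; would look roughly like this (&lt;em&gt;script.js&lt;/em&gt; stands for whatever your action’s main file is called):</p>

```shell
#!/bin/sh -l
# Install exact dependency versions from package-lock.json,
# then run the action script, forwarding the action args.
npm ci
node script.js $*
```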


&lt;p&gt;I chose &lt;em&gt;npm ci&lt;/em&gt; because it’s the easiest way to make sure you always get the same versions of the packages you want to install. It requires you to have &lt;em&gt;package.json&lt;/em&gt; and &lt;em&gt;package-lock.json&lt;/em&gt; in your project – but that’s a best practice anyway.&lt;/p&gt;

&lt;p&gt;The installation script is in the entry point file and NOT in the Dockerfile (as is usual in classic container use cases) because it makes it much easier to use an npm token and install private packages. All you need to do is add an NPM_TOKEN secret and use it in the entry point file (above &lt;em&gt;npm ci&lt;/em&gt;) by adding:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;npm config set //registry.npmjs.org/:_authToken=$NPM_TOKEN&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Finally, &lt;em&gt;node script.js $*&lt;/em&gt; runs the script and passes the action args as arguments to the script.&lt;/p&gt;
&lt;h2&gt;
  
  
  Node script tips and basic structure
&lt;/h2&gt;

&lt;p&gt;Over the years, I’ve developed a preferred structure for a node executable (CLI) script. I will share it here but for the purpose of this tutorial, this part is completely optional and at this stage, you’re more than ready to develop your own action in Node.js.&lt;/p&gt;

&lt;p&gt;The script looks like this:&lt;br&gt;
&lt;/p&gt;


&lt;p&gt;The important section is at the bottom: &lt;em&gt;if (require.main === module)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It checks if the file was imported/required or if it’s the entrypoint into the program. This allows reusing the same module both programmatically and as a CLI tool.&lt;/p&gt;

&lt;p&gt;If this is the entrypoint, I would then parse the command line arguments (using &lt;a href="https://github.com/tj/commander.js/" rel="noopener noreferrer"&gt;commander&lt;/a&gt;) passed in from &lt;em&gt;entrypoint.sh&lt;/em&gt;. The arguments were injected into &lt;em&gt;entrypoint.sh&lt;/em&gt; by GitHub from the workflow file through the container (more on that later).&lt;/p&gt;

&lt;p&gt;I then invoke the main function. Since it’s an async function, I handle its return value with a then clause and handle failure with a catch clause.&lt;/p&gt;

&lt;p&gt;It’s also useful to read the event, provided by GitHub, and use it in the script:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;const event = JSON.parse(fs.readFileSync('/github/workflow/event.json', 'utf8'))&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets, Args, and the execution environment
&lt;/h2&gt;

&lt;p&gt;The Actions environment takes some getting used to. Although GitHub provides great tutorials on all things workflow related, I wanted to mention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub provides many &lt;a href="https://developer.github.com/actions/creating-github-actions/accessing-the-runtime-environment/#environment-variables" rel="noopener noreferrer"&gt;environment variables&lt;/a&gt; inside the container running the Action, but most of the information can also be retrieved from the event file (see section above).&lt;/li&gt;
&lt;li&gt;Others are defined in the workflow file, while some are provided as part of the secrets mechanism in GitHub.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://developer.github.com/actions/creating-workflows/storing-secrets/#github-token-secret" rel="noopener noreferrer"&gt;Secrets&lt;/a&gt; are pretty straightforward, you define them in the repo settings tab and then they’re exposed as environment variables inside the container.&lt;/p&gt;

&lt;p&gt;Defining the secret in the repo settings tab:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fdatree.staging.wpengine.com%2Fwp-content%2Fuploads%2F2019%2F02%2Fp1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fdatree.staging.wpengine.com%2Fwp-content%2Fuploads%2F2019%2F02%2Fp1.png" alt="secrets tab"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the secret in your workflow file:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fdatree.staging.wpengine.com%2Fwp-content%2Fuploads%2F2019%2F02%2Fp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fdatree.staging.wpengine.com%2Fwp-content%2Fuploads%2F2019%2F02%2Fp2.png" alt="secrets code"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The only exception is the GitHub token, which you don’t need to define in the settings. The token is only exposed in the workflow file and GitHub will provide the token itself with these &lt;a href="https://developer.github.com/actions/creating-workflows/storing-secrets/#github-token-secret" rel="noopener noreferrer"&gt;permissions&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another important item to note is the mounted folder GitHub provides. It’s mounted under /github and provides a couple of useful things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the event under &lt;em&gt;/github/workflow/event.json&lt;/em&gt; and&lt;/li&gt;
&lt;li&gt;the repo where the action runs under &lt;em&gt;/github/workspace/REPO_NAME&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More information on the mechanics of the mount can be found &lt;a href="https://developer.github.com/actions/creating-github-actions/creating-a-docker-container/#workdir" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This tutorial provides a good starting point for anyone who wants to create their first Node.js action. My action can be found &lt;a href="https://github.com/marketplace/actions/validate-license-action" rel="noopener noreferrer"&gt;here&lt;/a&gt; and the code for it &lt;a href="https://github.com/datreeio/validate-license-action" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you’re interested in learning more about how you can use Github Actions to automate git workflows, check out this &lt;a href="https://pages.datree.io/building-a-dev-pipeline-using-github-actions-node.js-and-aws-ecs-fargate" rel="noopener noreferrer"&gt;webinar&lt;/a&gt; to watch how Shimon Tolts, Datree.io Co-founder, &lt;strong&gt;"built a CI/CD dev pipeline with Github Actions, Node.js, Docker, and AWS Fargate"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you have any questions, corrections, or suggestions please comment below or contact me directly.&lt;/p&gt;

</description>
      <category>github</category>
      <category>actions</category>
      <category>devops</category>
      <category>gitops</category>
    </item>
    <item>
      <title>How to Get More Out of Your Git Commit Message</title>
      <dc:creator>Eyar Zilberman</dc:creator>
      <pubDate>Thu, 28 Mar 2019 12:06:00 +0000</pubDate>
      <link>https://dev.to/datreeio/how-to-get-more-out-of-your-git-commit-message-59bj</link>
      <guid>https://dev.to/datreeio/how-to-get-more-out-of-your-git-commit-message-59bj</guid>
      <description>&lt;h1&gt;
  
  
  Git commit message
&lt;/h1&gt;

&lt;p&gt;If you haven’t already done it, &lt;a href="http://www.commitlogsfromlastnight.com/"&gt;defining your project’s Git commit message&lt;/a&gt; convention is on every developer’s “To-Do” list. Like flossing your teeth – everyone knows it’s a necessary best practice for healthy gums and avoiding the dentist, but it ends up on the ‘I’ll move it to tomorrow’s to-do list’, a.k.a. procrastination galore.&lt;/p&gt;

&lt;p&gt;I’m going to break down the reasons why you really have NO excuse not to set a Git commit message convention (or floss), and show you how to move this task from your “To-Do” list to “DONE” in a few simple steps!&lt;/p&gt;

&lt;p&gt;I’ll leave your dentist to yell at you about not flossing 😁&lt;/p&gt;

&lt;h1&gt;
  
  
  Why use a commit message convention?
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Better collaboration between potential and existing committers
&lt;/h3&gt;

&lt;p&gt;It is important to communicate the nature of changes in projects in order to foster transparency for a slew of people: existing teammates, future contributors, and sometimes the public and other stakeholders. A well-formatted Git commit message convention is the best way to communicate context about a change to fellow developers (and their future selves) when requesting a peer code review. It also makes it easier to explore a more structured commit history and to understand which notable changes have been made between each release (or version) of the project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Squeeze the most out of git utilities
&lt;/h3&gt;

&lt;p&gt;“$ &lt;a href="https://git-scm.com/docs/git-log"&gt;git log&lt;/a&gt;” is a beautiful and useful command. A well-organized commit message history leads to more readable messages that are easy to follow when looking through the project history. Suddenly, navigating through the log output becomes a possible mission! Embracing a commit message convention will also help you properly use other Git commands like git blame, git revert, git rebase, git shortlog and other subcommands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Support different automation tools
&lt;/h3&gt;

&lt;p&gt;Automation, automation, automation. Once you know you can rely on a standardized Git commit message, you can start building a flow around it and leverage the power of automation to level-up your project development flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic generation of CHANGELOG – keeps everyone up to date on &lt;a href="https://github.com/lob/generate-changelog"&gt;what happened between releases&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Automatic bump ups to the correct version – determine a &lt;a href="https://github.com/semantic-release/semantic-release"&gt;release semantic version&lt;/a&gt; based on the types of commits made per release.&lt;/li&gt;
&lt;li&gt;Automatic triggers to other processes – you are only limited by your own imagination on this one. For example, you can decide that a predefined string in the commit message &lt;a href="https://github.com/jenkinsci/commit-message-trigger-plugin"&gt;will trigger your CI&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Choosing the right commit message convention
&lt;/h1&gt;

&lt;p&gt;Now that we know that having a commit message convention is useful whether you’re working on an open source project, working on your own, or working with your team on a single project, standardizing the Git commit message is the only right way to commit!&lt;/p&gt;

&lt;p&gt;We covered the “why” part and now we will move to the “how” part – in my opinion, there are pretty much only two ways to go:&lt;/p&gt;

&lt;h3&gt;
  
  
  A. Adopt defacto best practices
&lt;/h3&gt;

&lt;p&gt;This approach is a simple and easy guideline, good for teams that are getting used to the idea of having a convention, or that have a majority of junior developers. Here are the top 5 best practices to implement TODAY:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Have a commit message – white space or no characters at all can’t be a good description for any code change.&lt;/li&gt;
&lt;li&gt;Keep a short subject line – long subjects won’t look good when executing some git commands. Limit the subject line to 50 characters.&lt;/li&gt;
&lt;li&gt;Don’t end the subject line with a period – it’s unnecessary. Especially when you are trying to keep the commit title to under 50 characters.&lt;/li&gt;
&lt;li&gt;Start with a capital letter – straight from the source: “this is as simple as it sounds. Begin all subject lines with a capital letter”.&lt;/li&gt;
&lt;li&gt;Link to a planning system – if you are working with a planning system (like Jira), it is important to create a logical link between the planning ticket number and the subsequent code change.&lt;/li&gt;
&lt;/ol&gt;
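<p>Putting these rules together, a subject line might look like this (the ticket number is, of course, made up):</p>

```text
Add rate limiting to login endpoint [JIRA-1234]
```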

&lt;h3&gt;
  
  
  B. Adopt an existing conventions framework
&lt;/h3&gt;

&lt;p&gt;This approach is relevant for advanced, engaged teams; the key benefit is that you can also use the supporting tools in the ecosystem of the chosen convention. There are plenty of different conventions, so I will focus on the top two:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/angular/angular/blob/master/CONTRIBUTING.md#commit"&gt;Angular Git commit message guidelines&lt;/a&gt; – well known and proven Git commit message convention which was introduced by the Angular project (A.K.A. Google).&lt;/li&gt;
&lt;li&gt;Emoji Git commit message convention – I’m not kidding, &lt;a href="https://github.com/carloscuesta/gitmoji"&gt;it’s a thing&lt;/a&gt;. Incorporating emoji in the commit message is an easy way of identifying the purpose or intention of a commit at a glance, and of course, emoji are fun 😜. Because this convention is a philosophy and not a method, if chosen, I would recommend &lt;a href="https://opensource.com/article/19/2/emoji-log-git-commit-messages"&gt;“Emoji-Log” commit message convention (by Ahmad Awais)&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
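<p>For a taste of the first option, an Angular-convention message follows the &lt;em&gt;type(scope): subject&lt;/em&gt; shape; the scope and body below are invented for illustration:</p>

```text
fix(changelog): skip commits with an empty subject

Previously the generator emitted blank entries when a
commit had no subject line.
```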

&lt;h1&gt;
  
  
  How to enforce Git commit message?
&lt;/h1&gt;

&lt;p&gt;If you got this far, you probably agree with my opinion that every project should have a defined commit message convention. Now, the question is how to make sure all the project committers (you, outside contributors and teammates) know about the chosen convention and adhere to it. My top two solutions for that are:&lt;/p&gt;

&lt;h3&gt;
  
  
  🔧 1. Git hooks
&lt;/h3&gt;
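<p>As a flavor of this option, a commit-msg hook can reject messages that break the first few rules from earlier in this post. This is a minimal sketch; Git passes the hook the path of the commit message file:</p>

```shell
#!/bin/sh
# Hypothetical .git/hooks/commit-msg sketch enforcing a non-empty
# subject line of at most 50 characters with no trailing period.

check_subject() {
  subject=$1
  if [ -z "$subject" ]; then
    echo "Commit rejected: the message must not be empty."
    return 1
  fi
  if [ ${#subject} -gt 50 ]; then
    echo "Commit rejected: keep the subject line to 50 characters."
    return 1
  fi
  case "$subject" in
    *.) echo "Commit rejected: do not end the subject with a period."
        return 1 ;;
  esac
  return 0
}

if [ $# -ge 1 ]; then
  check_subject "$(head -n 1 "$1")" || exit 1
fi
```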

&lt;h3&gt;
  
  
  🚔 2. Server-side policy enforcement
&lt;/h3&gt;

&lt;p&gt;Both options are explained, with a guide, in my &lt;a href="https://datree.io/blog/git-commit-message-conventions-for-readable-git-log/?source=dev.to"&gt;original blog post&lt;/a&gt; (I didn’t include them here to keep this post short).&lt;/p&gt;

</description>
      <category>git</category>
      <category>commitmessage</category>
    </item>
  </channel>
</rss>
