<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kamesh Sampath</title>
    <description>The latest articles on DEV Community by Kamesh Sampath (@kameshsampath).</description>
    <link>https://dev.to/kameshsampath</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F871628%2F6ce3bf5a-487a-4b59-a0d5-962a8cd15a37.jpeg</url>
      <title>DEV Community: Kamesh Sampath</title>
      <link>https://dev.to/kameshsampath</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kameshsampath"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Thu, 27 Feb 2025 18:27:32 +0000</pubDate>
      <link>https://dev.to/kameshsampath/-5ahf</link>
      <guid>https://dev.to/kameshsampath/-5ahf</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/kameshsampath" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F871628%2F6ce3bf5a-487a-4b59-a0d5-962a8cd15a37.jpeg" alt="kameshsampath"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kameshsampath/lets-build-together-a-local-playground-for-apache-polaris-28l5" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Let's Build Together: A Local Playground for Apache Polaris&lt;/h2&gt;
      &lt;h3&gt;Kamesh Sampath ・ Feb 25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#apachepolaris&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#snowflake&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#localstack&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>apachepolaris</category>
      <category>snowflake</category>
      <category>kubernetes</category>
      <category>localstack</category>
    </item>
    <item>
      <title>Let's Build Together: A Local Playground for Apache Polaris</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Tue, 25 Feb 2025 04:01:17 +0000</pubDate>
      <link>https://dev.to/kameshsampath/lets-build-together-a-local-playground-for-apache-polaris-28l5</link>
      <guid>https://dev.to/kameshsampath/lets-build-together-a-local-playground-for-apache-polaris-28l5</guid>
      <description>&lt;h2&gt;
  
  
  Why I Built a Developer-First Apache Polaris Starter Kit
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid6561mm5x9ows0zwinl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fid6561mm5x9ows0zwinl.png" alt="Photo by Maxime Agnelli on Unsplash" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As builders, we all know the pain of setting up a new development environment. Hours spent configuring dependencies, troubleshooting integration issues, and getting different services to play nicely together. When I started working with &lt;a href="https://github.com/apache/polaris" rel="noopener noreferrer"&gt;Apache Polaris&lt;/a&gt;, I faced these same challenges – and decided to do something about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: Getting Started with Apache Polaris
&lt;/h2&gt;

&lt;p&gt;Apache Polaris is a powerful open source Iceberg REST catalog implementation, originally contributed to the Apache Software Foundation by Snowflake. This donation to open source has made enterprise-grade data catalog capabilities accessible to the broader data community via simple REST APIs. &lt;/p&gt;

&lt;p&gt;Setting up Polaris in a development environment can be challenging. You need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A robust container orchestration platform&lt;/li&gt;
&lt;li&gt;A working metastore (typically PostgreSQL)&lt;/li&gt;
&lt;li&gt;S3-compatible storage&lt;/li&gt;
&lt;li&gt;Various security configurations and credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these components requires careful setup and configuration. For builders just getting started or wanting to experiment with Polaris, this overhead can be a significant barrier.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: A Complete Development Environment
&lt;/h2&gt;

&lt;p&gt;This is why I created &lt;a href="https://github.com/Snowflake-Labs/polaris-local-forge" rel="noopener noreferrer"&gt;an open source starter kit&lt;/a&gt; that provides everything needed to get Polaris up and running in a local development environment. The project follows the true spirit of open source collaboration, building upon and integrating with other excellent open source tools in the ecosystem.&lt;/p&gt;

&lt;p&gt;The kit automates the setup of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A lightweight &lt;a href="https://k3s.io" rel="noopener noreferrer"&gt;k3s&lt;/a&gt; Kubernetes cluster using &lt;a href="https://k3d.io" rel="noopener noreferrer"&gt;k3d&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://localstack.cloud" rel="noopener noreferrer"&gt;LocalStack&lt;/a&gt; for AWS S3 emulation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.postgresql.org" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; metastore with proper configurations&lt;/li&gt;
&lt;li&gt;All necessary security credentials and configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A key aspect of this starter kit is its comprehensive automation using &lt;a href="https://www.ansible.com" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt;. The &lt;code&gt;polaris-forge-setup&lt;/code&gt; directory houses Ansible playbooks that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate the entire setup process&lt;/li&gt;
&lt;li&gt;Verify if components are ready for use&lt;/li&gt;
&lt;li&gt;Handle catalog setup and configuration&lt;/li&gt;
&lt;li&gt;Provide cleanup capabilities for development iterations&lt;/li&gt;
&lt;li&gt;Enable smooth transitions to higher environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This automation-first approach serves two purposes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Immediate Development&lt;/strong&gt;: Developers can get started quickly with minimal manual intervention&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production Readiness&lt;/strong&gt;: The Ansible scripts serve as a template for scaling to higher environments, making it easier to adapt the setup for staging or production use cases&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By keeping everything open source and focusing on community-driven development, we ensure that builders can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learn from the implementation&lt;/li&gt;
&lt;li&gt;Customize for their specific needs&lt;/li&gt;
&lt;li&gt;Contribute improvements back to the community&lt;/li&gt;
&lt;li&gt;Build upon a foundation of trusted open source tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Snowflake OpenCatalog?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://other-docs.snowflake.com/en/opencatalog/overview" rel="noopener noreferrer"&gt;Snowflake OpenCatalog&lt;/a&gt; is an enterprise-grade implementation and managed service of upstream Polaris, making it incredibly easy to integrate with your existing data stack. By handling the operational complexities of running Polaris at scale, it allows teams to focus on their data applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Managed Infrastructure&lt;/strong&gt;: Snowflake handles all operational aspects including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Polaris server management and scaling&lt;/li&gt;
&lt;li&gt;Security and access control&lt;/li&gt;
&lt;li&gt;High availability and reliability&lt;/li&gt;
&lt;li&gt;Regular updates and maintenance&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Enterprise Integration&lt;/strong&gt;: Seamless connectivity with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Snowflake's ecosystem of data services&lt;/li&gt;
&lt;li&gt;Popular query engines and tools&lt;/li&gt;
&lt;li&gt;Existing data governance frameworks&lt;/li&gt;
&lt;li&gt;Enterprise security systems&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Production-Ready Features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Advanced access controls and auditing&lt;/li&gt;
&lt;li&gt;Cross-region and cross-cloud support&lt;/li&gt;
&lt;li&gt;Enterprise-grade SLAs&lt;/li&gt;
&lt;li&gt;Professional support&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  From Local Development to Enterprise Scale
&lt;/h3&gt;

&lt;p&gt;This starter kit provides an ideal path for builders working with Apache Polaris and considering OpenCatalog for production deployment. By working with the upstream version in this development environment, you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gain hands-on experience with core concepts&lt;/li&gt;
&lt;li&gt;Understand the underlying architecture&lt;/li&gt;
&lt;li&gt;Can prototype and test implementations&lt;/li&gt;
&lt;li&gt;Build expertise that transfers to OpenCatalog&lt;/li&gt;
&lt;li&gt;Have a clear path to production scaling&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When you're ready to move to production, the concepts and patterns you've learned here will help you make the most of OpenCatalog's enterprise capabilities while Snowflake handles the operational complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Design Decisions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Why Kubernetes with k3s and k3d?
&lt;/h3&gt;

&lt;p&gt;While Docker Compose is often the go-to choice for local development environments, Apache Polaris's distributed nature benefits significantly from Kubernetes's capabilities. Here's why:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Advanced Networking&lt;/strong&gt;: Kubernetes provides sophisticated networking between components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic service discovery and DNS resolution&lt;/li&gt;
&lt;li&gt;Internal load balancing for scalable services&lt;/li&gt;
&lt;li&gt;Ingress management for external access&lt;/li&gt;
&lt;li&gt;Network policies for traffic control&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Declarative Configuration&lt;/strong&gt;: Using tools like Helm and Kustomize, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain separate configurations for different environments&lt;/li&gt;
&lt;li&gt;Version control our infrastructure setup&lt;/li&gt;
&lt;li&gt;Apply consistent changes across deployments&lt;/li&gt;
&lt;li&gt;Manage complex dependencies between services&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reliable State Management&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;StatefulSets for databases and stateful services&lt;/li&gt;
&lt;li&gt;PersistentVolumes for durable storage&lt;/li&gt;
&lt;li&gt;Backup and restore capabilities&lt;/li&gt;
&lt;li&gt;Data replication when needed&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Security and Configuration&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Native secrets management&lt;/li&gt;
&lt;li&gt;Role-Based Access Control (RBAC)&lt;/li&gt;
&lt;li&gt;ConfigMaps for configuration management&lt;/li&gt;
&lt;li&gt;Service accounts for component authentication&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Production Readiness&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Same tools and patterns used in production&lt;/li&gt;
&lt;li&gt;Easy scaling of components&lt;/li&gt;
&lt;li&gt;Built-in monitoring and logging&lt;/li&gt;
&lt;li&gt;Consistent behavior across environments&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I specifically chose k3s because it's lightweight and perfect for development environments. Using k3d allows us to run k3s in Docker containers, making it even more convenient for local development. It provides a full Kubernetes experience without the resource overhead of something like minikube.&lt;/p&gt;

&lt;h3&gt;
  
  
  LocalStack for S3 Integration
&lt;/h3&gt;

&lt;p&gt;While we could have required developers to use actual AWS S3, LocalStack provides a perfect local alternative. It emulates AWS services locally, which means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No cloud costs during development&lt;/li&gt;
&lt;li&gt;No need for AWS credentials&lt;/li&gt;
&lt;li&gt;Faster development cycles&lt;/li&gt;
&lt;li&gt;Ability to work offline&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PostgreSQL as the Metastore
&lt;/h3&gt;

&lt;p&gt;PostgreSQL was a natural choice for the metastore. It's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Well-documented and widely used&lt;/li&gt;
&lt;li&gt;Easy to containerize&lt;/li&gt;
&lt;li&gt;Highly reliable&lt;/li&gt;
&lt;li&gt;Supported out of the box by Polaris&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Kustomize for Deployment Management
&lt;/h3&gt;

&lt;p&gt;Kustomize allows us to manage Kubernetes manifests in a clean, declarative way. It makes it easy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain different configurations for different environments&lt;/li&gt;
&lt;li&gt;Override settings without modifying base configurations&lt;/li&gt;
&lt;li&gt;Keep configurations DRY and maintainable&lt;/li&gt;
&lt;/ul&gt;
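
&lt;p&gt;As a sketch of what this looks like in practice (the file names and patch values here are illustrative, not the exact layout of the repository), a Kustomize overlay reuses the base manifests and patches only what differs:&lt;/p&gt;

```yaml
# overlays/dev/kustomization.yaml -- a hypothetical dev overlay
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # reuse the shared base manifests unchanged
patches:
  - patch: |-             # override only the image for local builds
      - op: replace
        path: /spec/template/spec/containers/0/image
        value: localhost:5000/apache-polaris-server-pgsql:dev
    target:
      kind: Deployment
      name: polaris
```

&lt;p&gt;Applying the overlay with &lt;code&gt;kubectl apply -k overlays/dev&lt;/code&gt; leaves the base configuration untouched.&lt;/p&gt;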

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Let me walk you through how to get up and running with this starter kit. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure you have the prerequisites installed:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Required tools and their version checks:&lt;/span&gt;

&lt;span class="c"&gt;# Docker (Desktop or Engine)&lt;/span&gt;
docker &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Example output: Docker version 24.0.7&lt;/span&gt;

&lt;span class="c"&gt;# Kubernetes CLI&lt;/span&gt;
kubectl version &lt;span class="nt"&gt;--client&lt;/span&gt;
&lt;span class="c"&gt;# Example output: Client Version: v1.28.2&lt;/span&gt;

&lt;span class="c"&gt;# k3d (&amp;gt;= 5.0.0)&lt;/span&gt;
k3d version
&lt;span class="c"&gt;# Example output: k3d version v5.6.0&lt;/span&gt;

&lt;span class="c"&gt;# Python (&amp;gt;= 3.11)&lt;/span&gt;
python &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Example output: Python 3.12.1&lt;/span&gt;

&lt;span class="c"&gt;# uv (Python packaging tool)&lt;/span&gt;
uv &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Example output: uv 0.1.12&lt;/span&gt;

&lt;span class="c"&gt;# Task&lt;/span&gt;
task &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Example output: Task version: v3.34.1&lt;/span&gt;

&lt;span class="c"&gt;# LocalStack (&amp;gt;= 3.0.0)&lt;/span&gt;
localstack &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;span class="c"&gt;# Example output: 3.0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Sign up for &lt;a href="https://app.localstack.cloud/sign-up" rel="noopener noreferrer"&gt;LocalStack&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Initial Setup
&lt;/h3&gt;

&lt;p&gt;Clone the repository and set up your environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/snowflake-labs/polaris-local-forge
&lt;span class="nb"&gt;cd &lt;/span&gt;polaris-local-forge

&lt;span class="c"&gt;# Set up environment variables&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PROJECT_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;/.kube/config"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;K3D_CLUSTER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;polaris-local-forge
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;K3S_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1.32.1-k3s1
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;FEATURES_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;/k8s"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Python Environment Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install uv&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;uv

&lt;span class="c"&gt;# Set up Python environment&lt;/span&gt;
uv python pin 3.12
uv venv
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate  &lt;span class="c"&gt;# On Unix-like systems&lt;/span&gt;
uv &lt;span class="nb"&gt;sync&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy the Environment
&lt;/h3&gt;

&lt;p&gt;The setup process is automated through several scripts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate required sensitive files&lt;/span&gt;
&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/polaris-forge-setup/prepare.yml

&lt;span class="c"&gt;# Create and set up the cluster&lt;/span&gt;
&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/bin/setup.sh

&lt;span class="c"&gt;# Wait for deployments to be ready&lt;/span&gt;
&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/polaris-forge-setup/cluster_checks.yml &lt;span class="nt"&gt;--tags&lt;/span&gt; namespace,postgresql,localstack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy Polaris
&lt;/h3&gt;

&lt;p&gt;This is where things get interesting - deploying Polaris itself. You have two options for the container images:&lt;/p&gt;

&lt;h4&gt;
  
  
  Option 1: Use Pre-built Images
&lt;/h4&gt;

&lt;p&gt;Apache Polaris doesn't currently publish official images, but you can use our pre-built images with PostgreSQL dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull ghcr.io/snowflake-labs/polaris-local-forge/apache-polaris-server-pgsql
docker pull ghcr.io/snowflake-labs/polaris-local-forge/apache-polaris-admin-tool-pgsql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Option 2: Build Images Locally
&lt;/h4&gt;

&lt;p&gt;Alternatively, you can build the images from source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Update IMAGE_REGISTRY in Taskfile.yml, then run:&lt;/span&gt;
task images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you choose to build locally, remember to update the image references in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;k8s/polaris/deployment.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;k8s/polaris/bootstrap.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;k8s/polaris/purge.yaml&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Deploy and Verify
&lt;/h4&gt;

&lt;p&gt;Apply the Kubernetes manifests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply Polaris manifests&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/k8s/polaris

&lt;span class="c"&gt;# Verify deployments and jobs&lt;/span&gt;
&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/polaris-forge-setup/cluster_checks.yml &lt;span class="nt"&gt;--tags&lt;/span&gt; polaris
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Setting Up Your First Catalog
&lt;/h3&gt;

&lt;p&gt;Before creating your first catalog, configure your AWS environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ENDPOINT_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localstack.localstack:14566
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test
export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test
export &lt;/span&gt;&lt;span class="nv"&gt;AWS_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;us-east-1

&lt;span class="c"&gt;# Run the catalog setup&lt;/span&gt;
&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/polaris-forge-setup/catalog_setup.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Pro Tip&lt;/strong&gt;: You can customize the default catalog settings by modifying values in &lt;a href="https://github.com/Snowflake-Labs/polaris-local-forge/blob/main/polaris-forge-setup/defaults/main.yml" rel="noopener noreferrer"&gt;polaris-forge-setup/defaults/main.yml&lt;/a&gt;. This file contains configurable parameters for your catalog, principal roles, and permissions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Play with the Catalog
&lt;/h3&gt;

&lt;p&gt;Once your catalog is set up, you can explore its functionality using the provided Jupyter notebook. The notebook &lt;a href="https://github.com/Snowflake-Labs/polaris-local-forge/blob/main/notebooks/verify_setup.ipynb" rel="noopener noreferrer"&gt;notebooks/verify_setup.ipynb&lt;/a&gt; walks you through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a namespace&lt;/li&gt;
&lt;li&gt;Defining a table&lt;/li&gt;
&lt;li&gt;Inserting sample data&lt;/li&gt;
&lt;li&gt;Verifying data storage in LocalStack&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This hands-on exploration helps you understand how Polaris integrates with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The PostgreSQL metastore for catalog management&lt;/li&gt;
&lt;li&gt;LocalStack's S3 emulation for data storage&lt;/li&gt;
&lt;li&gt;The overall Apache Iceberg table format structure&lt;/li&gt;
&lt;/ul&gt;
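
&lt;p&gt;For readers who prefer a script over the notebook, here is a minimal PyIceberg sketch of the same connection. The port numbers, warehouse name, and credentials below are illustrative placeholders, not the kit's actual defaults; use the values produced by the setup playbooks:&lt;/p&gt;

```python
# Sketch: connect to an Iceberg REST catalog (such as Apache Polaris) with PyIceberg.
# Every endpoint/credential value below is a placeholder for illustration only.

CATALOG_PROPERTIES = {
    "type": "rest",
    "uri": "http://localhost:18181/api/catalog",  # assumed local port-forward to Polaris
    "credential": "CLIENT_ID:CLIENT_SECRET",      # principal credentials from catalog setup
    "warehouse": "polardb",
    "s3.endpoint": "http://localhost:14566",      # LocalStack S3 endpoint
    "s3.region": "us-east-1",
}


def list_catalog_namespaces():
    """Load the catalog and list its namespaces (requires the cluster to be running)."""
    from pyiceberg.catalog import load_catalog  # installed via `uv sync`

    catalog = load_catalog("polaris", **CATALOG_PROPERTIES)
    return catalog.list_namespaces()

# Call list_catalog_namespaces() once the cluster and catalog are up.
```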

&lt;p&gt;You can visually verify your setup by checking the LocalStack console at &lt;a href="https://app.localstack.cloud/inst/default/resources/s3/polardb" rel="noopener noreferrer"&gt;https://app.localstack.cloud/inst/default/resources/s3/polardb&lt;/a&gt;, where you'll see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Catalog storage structure&lt;/li&gt;
&lt;li&gt;Metadata files&lt;/li&gt;
&lt;li&gt;Actual data files&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Video Walkthrough
&lt;/h3&gt;

&lt;p&gt;For a detailed visual guide of setting up and using this development environment, check out my walkthrough video:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/LvIUv3JtUNs" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsbcvh52bj6fszg39r0n.jpg" alt="Apache Polaris Local Development Setup" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This video demonstrates the entire process from initial setup to running your first queries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting Tips
&lt;/h2&gt;

&lt;p&gt;If you run into issues, here are some helpful commands for debugging:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check Polaris server logs&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; polaris deployment/polaris

&lt;span class="c"&gt;# Check PostgreSQL logs&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; polaris statefulset/postgresql

&lt;span class="c"&gt;# Check LocalStack logs&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; localstack deployment/localstack

&lt;span class="c"&gt;# Check events in the polaris namespace&lt;/span&gt;
kubectl get events &lt;span class="nt"&gt;-n&lt;/span&gt; polaris &lt;span class="nt"&gt;--sort-by&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'.lastTimestamp'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Impact: Streamlined Development Experience
&lt;/h2&gt;

&lt;p&gt;With this starter kit, what used to take days of setup and configuration now takes minutes. Builders can focus on creating and experimenting with Polaris rather than wrestling with infrastructure setup.&lt;/p&gt;

&lt;p&gt;The kit is open source and available on &lt;a href="https://github.com/Snowflake-Labs/polaris-local-forge" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;. I welcome contributions and feedback from the community. Together, we can make the development experience even better for everyone working with Apache Polaris.&lt;/p&gt;

&lt;p&gt;Building should be about creating, not configuring. This starter kit aims to remove the friction from getting started with Apache Polaris, allowing builders to focus on what matters most – creating great applications.&lt;/p&gt;

&lt;p&gt;Don't forget to check out another project where I used this starter kit: &lt;a href="https://github.com/kameshsampath/balloon-popper-demo" rel="noopener noreferrer"&gt;https://github.com/kameshsampath/balloon-popper-demo&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Related Projects and Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Core Components
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/apache/arrow-datafusion-python" rel="noopener noreferrer"&gt;Apache Polaris&lt;/a&gt; - Data Catalog and Governance Platform&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://py.iceberg.apache.org/" rel="noopener noreferrer"&gt;PyIceberg&lt;/a&gt; - Python library for Apache Iceberg&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/localstack/localstack" rel="noopener noreferrer"&gt;LocalStack&lt;/a&gt; - AWS Cloud Service Emulator&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://k3d.io" rel="noopener noreferrer"&gt;k3d&lt;/a&gt; - k3s in Docker&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://k3s.io" rel="noopener noreferrer"&gt;k3s&lt;/a&gt; - Lightweight Kubernetes Distribution&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.ansible.com" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; - Automation Platform&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Development Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; - Container Platform&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; - Container Orchestration&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt; - Kubernetes Package Manager&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/reference/kubectl/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; - Kubernetes CLI&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/astral-sh/uv" rel="noopener noreferrer"&gt;uv&lt;/a&gt; - Python Packaging Tool&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>apachepolaris</category>
      <category>snowflake</category>
      <category>kubernetes</category>
      <category>localstack</category>
    </item>
    <item>
      <title>Elements of Event Driven Architecture (EDA)</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Wed, 08 Nov 2023 08:06:04 +0000</pubDate>
      <link>https://dev.to/kameshsampath/elements-of-event-driven-architectureeda-4jnf</link>
      <guid>https://dev.to/kameshsampath/elements-of-event-driven-architectureeda-4jnf</guid>
      <description>&lt;p&gt;Well, we encounter lots of data in our everyday life e.g. weather reports, flight timings, food deliveries etc., All these data are continuous and flowing from variety of sources. Such continuously flowing data is called a &lt;strong&gt;Data Stream&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu1f3s6d26psc7d26ue4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu1f3s6d26psc7d26ue4.png" alt="Data Stream" width="278" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A raw data stream is useless to an application unless it has some identifiable elements to it.&lt;/p&gt;

&lt;p&gt;Let me explain with a few examples: &lt;em&gt;Joe joined Acme Corp as a Developer on 25 October 2023&lt;/em&gt;, &lt;em&gt;Mini placed an order for two pizzas at 12:00 PM&lt;/em&gt;. If we take out the verbs (actions) "joined" and "ordered" along with the times "25 Oct" and "12:00 PM", the data becomes useless.&lt;br&gt;
 &lt;br&gt;
For an application to use a data stream effectively, each piece of data in the stream should be an &lt;strong&gt;event&lt;/strong&gt;: data that has a &lt;strong&gt;time&lt;/strong&gt; and an &lt;strong&gt;action&lt;/strong&gt; associated with it. A stream of data as events brings great value to applications via analytics, triggers, chaining, and more.&lt;/p&gt;

&lt;p&gt;From the &lt;em&gt;Mini placed an order for two pizzas at 12:00 PM&lt;/em&gt; event, we could extract the following analytical information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does Mini place an order for two pizzas every day?&lt;/li&gt;
&lt;li&gt;Is the order placed at 12:00 PM every day?&lt;/li&gt;
&lt;li&gt;Is the order delivered by the same food chain?&lt;/li&gt;
&lt;li&gt;What pizzas were ordered?&lt;/li&gt;
&lt;li&gt;Are food chains delivering the orders on time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The process of using events to build such analytical information is called &lt;strong&gt;Data Processing&lt;/strong&gt;. Data Processing could be done by a human, an application, an IoT device, and so on.&lt;/p&gt;

&lt;p&gt;Any data streaming scenario always involves two essential primitive entities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Event Producer&lt;/strong&gt; is someone or something that produces an event. In our example above, Mini is the event producer who places the order for pizza.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0xbfsevupoo05wxpy42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0xbfsevupoo05wxpy42.png" alt="Event Producer" width="630" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Event Consumer&lt;/strong&gt; is someone or something that consumes or uses the event produced by the Event Producer. In Mini's order example, it could be the pizza house that takes, processes, and delivers the order.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq1eqm1g80iir5spkls3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcq1eqm1g80iir5spkls3.png" alt="Event Consumer" width="524" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since Event Producers produce events that are consumed by Event Consumers, software architectures have evolved around building applications that act as Event Producers/Consumers and leverage these streams of events. This architectural style of building applications around events is called &lt;strong&gt;Event Driven Architecture (EDA)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws0av94lexm19rlarpoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws0av94lexm19rlarpoq.png" alt="Event Driven Architecture" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building applications using EDA enforces a few basic requirements on the platform and on the software framework used to build such applications.&lt;/p&gt;

&lt;p&gt;The platform used to build such applications needs to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scalable&lt;/strong&gt; - as events flow continuously, there can be sudden spikes in the number of incoming events; the platform needs to be scalable or elastic to handle such spikes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Durable&lt;/strong&gt; - as events may be consumed immediately or a bit later in time, the platform should durably store events and deliver them when needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resilient&lt;/strong&gt; - The platform should be capable of handling failures and recovering from them without data loss&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Retention&lt;/strong&gt; - retaining data for a configurable amount of time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responding to Events&lt;/strong&gt; - the platform should be able to respond to events, at a bare minimum by acknowledging each event on receipt&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ordering&lt;/strong&gt; - as events are associated with time, ordering helps consumers who need to process events in a specific order, e.g. within a time or date range&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A platform alone might not be enough to build an effective EDA-style application. There is also a need for integration with the platform via plugins, APIs, and so on; in other words, a framework that is extensible, pluggable, and works on common semantics.&lt;/p&gt;

&lt;p&gt;The framework should support:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Sources&lt;/strong&gt; - the sources from which the events are generated, i.e. Event Producers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Sinks&lt;/strong&gt; - the destinations into which the processed events are drained, i.e. Event Consumers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API&lt;/strong&gt; - An interface to connect and work with the platform, data sources and data sinks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkcphngtg3ism9a097er.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkcphngtg3ism9a097er.png" alt="Platform and Integration" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some great Data Streaming platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kafka.apache.org" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt; - Developed at LinkedIn and Opensources to Apache Software Foundation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://redpanda.com" rel="noopener noreferrer"&gt;Redpanda&lt;/a&gt; - is a simple, powerful, and cost-efficient streaming data platform that is compatible with Kafka® APIs while eliminating Kafka complexity.&lt;/li&gt;
&lt;/ul&gt;
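&lt;p&gt;To make the producer and consumer roles concrete, here is a hedged sketch of how an order event could flow through a Kafka topic. The &lt;code&gt;kafka-*&lt;/code&gt; commands assume a broker running at &lt;code&gt;localhost:9092&lt;/code&gt; and the Kafka CLI tools on your &lt;code&gt;PATH&lt;/code&gt;, so they are shown commented for reference:&lt;/p&gt;

```shell
# Illustrative order event (same shape as the earlier examples).
EVENT='{"who":"Mini","action":"order_placed","time":"12:00"}'

# Create a durable topic with a 7-day retention window:
#   kafka-topics.sh --bootstrap-server localhost:9092 --create \
#     --topic orders --partitions 3 --replication-factor 1 \
#     --config retention.ms=604800000

# The producer (Mini's ordering app) publishes the event:
#   echo "$EVENT" | kafka-console-producer.sh \
#     --bootstrap-server localhost:9092 --topic orders

# The consumer (the pizza house) reads events from the beginning:
#   kafka-console-consumer.sh --bootstrap-server localhost:9092 \
#     --topic orders --from-beginning

echo "$EVENT"
```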

&lt;p&gt;Apache Kafka also supports processing of streaming data through its ecosystem of the &lt;a href="https://kafka.apache.org/documentation/streams/" rel="noopener noreferrer"&gt;Streams API&lt;/a&gt; and &lt;a href="https://ksqldb.io/" rel="noopener noreferrer"&gt;ksqlDB&lt;/a&gt;. But for an effective architecture, it is always nice to keep the core data streaming and data processing decoupled (&lt;a href="https://en.wikipedia.org/wiki/Separation_of_concerns" rel="noopener noreferrer"&gt;Separation of Concerns&lt;/a&gt;). Such decoupling helps in processing data from heterogeneous sources, e.g. Apache Kafka, databases, CSV files on a file system, and so on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://flink.apache.org" rel="noopener noreferrer"&gt;Apache Flink&lt;/a&gt; is one such framework and distributed processing engine for stateful computations over unbounded(Apache Kafka) and bounded data streams(Database).&lt;/p&gt;

&lt;p&gt;To summarise, we learnt:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What a Data Stream and an Event are&lt;/li&gt;
&lt;li&gt;What an Event Producer and an Event Consumer are&lt;/li&gt;
&lt;li&gt;The architectural style used to build applications around events (EDA)&lt;/li&gt;
&lt;li&gt;What an effective EDA platform needs&lt;/li&gt;
&lt;li&gt;Some great platforms and frameworks that can be used to build EDA applications&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>datastreaming</category>
      <category>kafka</category>
      <category>dataprocessing</category>
      <category>basics</category>
    </item>
    <item>
      <title>Trigger CI using Terraform Cloud</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Mon, 10 Apr 2023 01:23:33 +0000</pubDate>
      <link>https://dev.to/kameshsampath/trigger-ci-using-terraform-cloud-1mao</link>
      <guid>https://dev.to/kameshsampath/trigger-ci-using-terraform-cloud-1mao</guid>
      <description>&lt;p&gt;Continuous Integration(CI) pipelines needs a &lt;strong&gt;target&lt;/strong&gt; infrastructure to which the CI artifacts are deployed. The deployments are handled by CI or we can leverage Continuous Deployment pipelines. Modern day architecture uses automation tools like &lt;a href="https://terraform.io" rel="noopener noreferrer"&gt;terraform&lt;/a&gt;, &lt;a href="https://www.ansible.com/" rel="noopener noreferrer"&gt;ansible&lt;/a&gt; to provision the target infrastructure, this type of provisioning is called &lt;a href="https://en.wikipedia.org/wiki/Infrastructure_as_code" rel="noopener noreferrer"&gt;IaC&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Usually CI/CD and IaC don't run in tandem. Often we want to trigger the CI pipeline only when the &lt;strong&gt;target&lt;/strong&gt; infrastructure is ready to be bootstrapped with the software components required by the CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;As part of this DIY blog, let us tackle the aforementioned problem with a use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;As a CI/CD user, I would like to provision a Kubernetes cluster on Google Cloud Platform (GKE) using Terraform. Successful provisioning of the cluster should &lt;strong&gt;notify&lt;/strong&gt; a CI pipeline to start bootstrapping &lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt; onto GKE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa56swhyp5lzm9n5d9kpr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa56swhyp5lzm9n5d9kpr.png" alt="Architecture Overview" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://app.terraform.io/public/signup/account" rel="noopener noreferrer"&gt;Terraform Cloud Account&lt;/a&gt;. Create a workspace on Terraform Cloud to be used for this exercise.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud Account&lt;/a&gt; used to create the Google Kubernetes Engine(GKE) cluster.&lt;/li&gt;
&lt;li&gt;Though we can use any CI platform, for this demo we will use &lt;a href="https://www.harness.io/products/continuous-integration" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt; as our CI platform. You can do a &lt;strong&gt;free tier&lt;/strong&gt; signup from &lt;a href="https://app.harness.io/auth/#/signup/?module=ci&amp;amp;utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-tfc-ci-demos&amp;amp;utm_term=tutorial" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Demo Sources
&lt;/h2&gt;

&lt;p&gt;The demo uses the following git repositories as sources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IaC &lt;a href="https://github.com/harness-apps/vanilla-gke" rel="noopener noreferrer"&gt;vanilla-gke&lt;/a&gt;: the Terraform source repository that will be used with Terraform Cloud to provision GKE.&lt;/li&gt;
&lt;li&gt;Kubernetes manifests &lt;a href="https://github.com/harness-apps/bootstrap-gke" rel="noopener noreferrer"&gt;bootstrap-argocd&lt;/a&gt;: the repository that holds the Kubernetes manifests to bootstrap Argo CD onto the GKE cluster.&lt;/li&gt;
&lt;li&gt;Harness CI Pipeline &lt;a href="https://github.com/harness-apps/tfc-notification-demo" rel="noopener noreferrer"&gt;tfc-notification-demo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Fork and Clone the Sources
&lt;/h3&gt;

&lt;p&gt;To make forking and cloning easier we will use the &lt;a href="https://cli.github.com/" rel="noopener noreferrer"&gt;gh CLI&lt;/a&gt;. Download it and add &lt;code&gt;gh&lt;/code&gt; to your &lt;code&gt;$PATH&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let us create a directory where we want to place all our demo sources,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/tfc-notification-demo"&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="s2"&gt;/tfc-notification-demo"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEMO_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  IaC
&lt;/h4&gt;

&lt;p&gt;Clone and fork the &lt;code&gt;vanilla-gke&lt;/code&gt; repo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh repo clone harness-apps/vanilla-gke
&lt;span class="nb"&gt;cd &lt;/span&gt;vanilla-gke
gh repo fork
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TFC_GKE_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Bootstrap Argo CD Sources
&lt;/h4&gt;

&lt;p&gt;Clone and fork the &lt;code&gt;bootstrap-argocd&lt;/code&gt; repo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ..
gh repo clone harness-apps/bootstrap-argocd
&lt;span class="nb"&gt;cd &lt;/span&gt;bootstrap-argocd
gh repo fork
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ARGOCD_BOOTSTRAP_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Harness CI Pipeline
&lt;/h4&gt;

&lt;p&gt;Clone and fork the &lt;code&gt;tfc-notification-demo&lt;/code&gt; repo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ..
gh repo clone harness-apps/tfc-notification-demo
&lt;span class="nb"&gt;cd &lt;/span&gt;tfc-notification-demo
gh repo fork
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TFC_DEMO_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the rest of the blog we will refer to the repositories &lt;code&gt;vanilla-gke&lt;/code&gt;, &lt;code&gt;bootstrap-argocd&lt;/code&gt;, and &lt;code&gt;tfc-notification-demo&lt;/code&gt; as &lt;code&gt;$TFC_GKE_REPO&lt;/code&gt;, &lt;code&gt;$ARGOCD_BOOTSTRAP_REPO&lt;/code&gt;, and &lt;code&gt;$TFC_DEMO_REPO&lt;/code&gt; respectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Harness CI
&lt;/h2&gt;

&lt;p&gt;In the following sections we will define and create the resources required to build a CI pipeline using the Harness platform.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Harness Project
&lt;/h3&gt;

&lt;p&gt;Create a new Harness project named &lt;code&gt;terraform_integration_demos&lt;/code&gt; using the Harness Web Console,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2xlbd9079oesmni06ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2xlbd9079oesmni06ut.png" alt="New Harness Project" width="800" height="1302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update its details as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hw2w4j51yosjj3in819.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hw2w4j51yosjj3in819.png" alt="New Harness Project Details" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow the wizard, leaving the rest at defaults, and on the last screen choose &lt;strong&gt;Continuous Integration&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2emzksrd7wplaapqjlj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2emzksrd7wplaapqjlj.png" alt="Use CI" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Go to Module&lt;/strong&gt; to go to the project home page.&lt;/p&gt;

&lt;h3&gt;
  
  
  Define New Pipeline
&lt;/h3&gt;

&lt;p&gt;Click &lt;strong&gt;Pipelines&lt;/strong&gt; to define a new pipeline,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgads8qcwj6t8y7oeg4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgads8qcwj6t8y7oeg4m.png" alt="Get Started with CI" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this demo we will be cloning the repositories manually, hence disable the built-in clone step,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf9r24aycvqp0noecy2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf9r24aycvqp0noecy2o.png" alt="Disable" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Pipelines&lt;/strong&gt; and delete the default &lt;strong&gt;Build pipeline&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdmpydgk99elqufic570.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftdmpydgk99elqufic570.png" alt="Delete Pipeline" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Add &lt;code&gt;harnessImage&lt;/code&gt; Docker Registry Connector
&lt;/h3&gt;

&lt;p&gt;As part of the pipelines we will be pulling images from Docker Hub. The &lt;code&gt;harnessImage&lt;/code&gt; &lt;a href="https://developer.harness.io/docs/platform/connectors/connect-to-harness-container-image-registry-using-docker-connector" rel="noopener noreferrer"&gt;Docker Registry Connector&lt;/a&gt; helps pull public Docker Hub images as an anonymous user.&lt;/p&gt;

&lt;p&gt;Let us configure the &lt;code&gt;harnessImage&lt;/code&gt; connector as described in the Docker Registry Connector documentation. The pipelines we create in later sections will use this connector.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure GitHub
&lt;/h3&gt;

&lt;h4&gt;
  
  
  GitHub Credentials
&lt;/h4&gt;

&lt;p&gt;Create a &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="noopener noreferrer"&gt;GitHub PAT&lt;/a&gt; for the account where you have forked the repositories &lt;code&gt;$TFC_GKE_REPO&lt;/code&gt; and &lt;code&gt;$ARGOCD_BOOTSTRAP_REPO&lt;/code&gt;. We will refer to the token as &lt;code&gt;$GITHUB_PAT&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;From the &lt;strong&gt;Project Setup&lt;/strong&gt; click &lt;strong&gt;Secrets&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd21zbiyr2gdqjt4j0wrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd21zbiyr2gdqjt4j0wrl.png" alt="New Text Secret" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update the encrypted text secret details as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bijo1z8gv5s99txxceq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bijo1z8gv5s99txxceq.png" alt="GitHub PAT Secret" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to save the secret,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb5cu0a0bg5fk51w1o7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqb5cu0a0bg5fk51w1o7z.png" alt="Project Secrets" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Connector
&lt;/h4&gt;

&lt;p&gt;As we need to clone the sources from GitHub, we need to define a &lt;strong&gt;GitHub Connector&lt;/strong&gt;,  from the &lt;strong&gt;Project Setup&lt;/strong&gt; click &lt;strong&gt;Connectors&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgra2qdl6a7x66btz5jqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgra2qdl6a7x66btz5jqk.png" alt="New Connector" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From connector list select &lt;strong&gt;GitHub&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktjszxdgilu44rn36my0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fktjszxdgilu44rn36my0.png" alt="New GitHub Connector" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter the name as &lt;strong&gt;GitHub&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdlxl7h6wm4aa724bi8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdlxl7h6wm4aa724bi8l.png" alt="GitHub Connector Overview" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt; to enter the connector details,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx13z7pzaslkj14uznd6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx13z7pzaslkj14uznd6g.png" alt="GitHub Connector Details" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt; and update the GitHub Connector credentials,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnx9y4cezd1899j4dl1qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnx9y4cezd1899j4dl1qm.png" alt="GitHub Connector Credentials" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When selecting the &lt;strong&gt;Personal Access Token&lt;/strong&gt;, make sure you select the &lt;code&gt;GitHub PAT&lt;/code&gt; secret that we defined in the previous section,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyf7wqipnwk9dx7zafpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiyf7wqipnwk9dx7zafpy.png" alt="GitHub PAT Secret" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt; and select &lt;strong&gt;Connect through Harness Platform&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo99912vww36fbleygp9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo99912vww36fbleygp9d.png" alt="Connect through Harness Platform" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Save and Continue&lt;/strong&gt; to run the connection test; if all went well, the connection should be successful,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2090ay6wtd98l1yb78kz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2090ay6wtd98l1yb78kz.png" alt="GH Connection Success" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Cloud Service Account Secret
&lt;/h2&gt;

&lt;p&gt;We need Google Service Account (GSA) credentials (a JSON key) to query the GKE cluster details and create resources on it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Set environment
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GCP_PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the Google Cloud Project where Kubernetes Cluster is created"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GSA_KEY_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"path where to store the key file"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create SA
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts create gke-user &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"GKE User"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--display-name&lt;/span&gt; &lt;span class="s2"&gt;"gke-user"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  IAM Binding
&lt;/h3&gt;

&lt;p&gt;Grant the service account the permissions needed to provision Kubernetes resources,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud projects add-iam-policy-binding &lt;span class="nv"&gt;$GCP_PROJECT&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--member&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"serviceAccount:&lt;/span&gt;&lt;span class="nv"&gt;$GSA_NAME&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$GCP_PROJECT&lt;/span&gt;&lt;span class="s2"&gt;.iam.gserviceaccount.com"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"roles/container.admin"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Download And Save GSA Key
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;IMPORTANT: Only security admins can create JSON keys. Ensure the Google Cloud user you are using has the &lt;strong&gt;Security Admin&lt;/strong&gt; role.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud iam service-accounts keys create &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GSA_KEY_FILE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--iam-account&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gke-user@&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GCP_PROJECT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.iam.gserviceaccount.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
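&lt;p&gt;Optionally, before uploading the key to Harness, a quick sanity check can confirm the key file is valid JSON. This is a hypothetical check, assuming &lt;code&gt;python3&lt;/code&gt; is on your &lt;code&gt;PATH&lt;/code&gt;; the stand-in file it writes is for illustration only and is skipped when a real key exists:&lt;/p&gt;

```shell
# Default the path for illustration; with a real key, $GSA_KEY_FILE is
# already set from the earlier "Set environment" step.
GSA_KEY_FILE="${GSA_KEY_FILE:-/tmp/gke-user-key.json}"

# Illustration only: write a stand-in file when no real key exists.
[ -f "$GSA_KEY_FILE" ] || printf '{"type":"service_account"}' > "$GSA_KEY_FILE"

# Fail loudly if the key file is not valid JSON.
python3 -m json.tool "$GSA_KEY_FILE" > /dev/null && echo "key file OK"
```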



&lt;h3&gt;
  
  
  GSA Secret
&lt;/h3&gt;

&lt;p&gt;Go back to &lt;strong&gt;Project Setup&lt;/strong&gt; and click &lt;strong&gt;Secrets&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jbw4oldn0x2729ypy9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2jbw4oldn0x2729ypy9g.png" alt="New File Secret" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the GSA secret details as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecn7fe4taun68wbzhhn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecn7fe4taun68wbzhhn2.png" alt="GSA Secret Details" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;: When you browse and select make sure you select the &lt;code&gt;$GSA_KEY_FILE&lt;/code&gt; as the file for the secret.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to save the secret,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy69kufxvkhg8ia8qxs0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy69kufxvkhg8ia8qxs0q.png" alt="Project Secrets" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Workspace
&lt;/h2&gt;

&lt;p&gt;In your Terraform Cloud account, create a new workspace called &lt;strong&gt;vanilla-gke&lt;/strong&gt;. Update the workspace settings to use Version Control and point it to &lt;code&gt;$TFC_GKE_REPO&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuifarjnq1vxpf1h9dhd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftuifarjnq1vxpf1h9dhd.png" alt="TFC Workspace VCS" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure the workspace with the following variables,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxld08inhuftsu4r87l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxld08inhuftsu4r87l1.png" alt="TFC Workspace Variables" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details on available variables, check &lt;a href="https://github.com/harness-apps/vanilla-gke#inputs" rel="noopener noreferrer"&gt;Terraform Inputs&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;: &lt;code&gt;GOOGLE_CREDENTIALS&lt;/code&gt; is a Google Service Account JSON key with permissions to create a GKE cluster. Please check &lt;a href="https://github.com/harness-apps/vanilla-gke#pre-requisites" rel="noopener noreferrer"&gt;https://github.com/harness-apps/vanilla-gke#pre-requisites&lt;/a&gt; for the required roles and permissions. Terraform will use this key to create the GKE cluster. When you add the key as a Terraform variable, you need to collapse it into a single line by stripping the newlines, e.g. &lt;code&gt;cat YOUR_GOOGLE_CREDENTIALS_KEY_FILE | tr -d '\n'&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;
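For clarity, here is a hedged sketch of that newline-stripping step; the key file name and its contents below are dummies created purely for illustration:

```shell
# Hypothetical GSA key file path; in practice this is the JSON key you
# downloaded for the service account.
GSA_KEY_FILE="gsa-key.json"
printf '{\n  "type": "service_account"\n}\n' > "$GSA_KEY_FILE"

# Collapse the multi-line JSON into a single line so it can be pasted as
# the GOOGLE_CREDENTIALS variable value in Terraform Cloud.
GOOGLE_CREDENTIALS="$(tr -d '\n' < "$GSA_KEY_FILE")"
echo "$GOOGLE_CREDENTIALS"
```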

&lt;p&gt;Going forward we will refer to the Terraform workspace as &lt;code&gt;$TF_WORKSPACE&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Look up your Terraform Cloud organization,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9txo4xjf5wukxi9pfp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9txo4xjf5wukxi9pfp3.png" alt="TFC Cloud Organization" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set its value to the variable &lt;code&gt;$TF_CLOUD_ORGANIZATION&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, we need a Terraform API token that can be used to pull the outputs of the Terraform Cloud run. From your Terraform user settings, &lt;strong&gt;Create an API token&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob5t70rajo04jz5arjyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob5t70rajo04jz5arjyd.png" alt="Terraform API Token" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the API token to the variable &lt;code&gt;$TF_TOKEN_app_terraform_io&lt;/code&gt;. We will use this variable in the CI pipeline.&lt;/p&gt;
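To motivate the token: the CI pipeline can use it against the Terraform Cloud API to look up the workspace and read the run outputs. A minimal sketch with placeholder organization and workspace names, and the actual request left commented out:

```shell
# Placeholders; substitute your own organization, workspace and token.
TF_CLOUD_ORGANIZATION="acme-org"
TF_WORKSPACE="vanilla-gke"
TF_TOKEN_app_terraform_io="REPLACE_WITH_YOUR_API_TOKEN"

# Terraform Cloud "show workspace" API endpoint.
WORKSPACE_URL="https://app.terraform.io/api/v2/organizations/$TF_CLOUD_ORGANIZATION/workspaces/$TF_WORKSPACE"
echo "$WORKSPACE_URL"

# curl --silent \
#   --header "Authorization: Bearer $TF_TOKEN_app_terraform_io" \
#   --header "Content-Type: application/vnd.api+json" \
#   "$WORKSPACE_URL"
```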

&lt;h2&gt;
  
  
  Harness CI Pipeline
&lt;/h2&gt;

&lt;p&gt;Getting back to the Harness web console, navigate to your project &lt;strong&gt;terraform_integration_demos&lt;/strong&gt;, click &lt;strong&gt;Pipelines&lt;/strong&gt; and &lt;strong&gt;Create a Pipeline&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Import From Git&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrnqdphbjmgjsn4jaaia.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrnqdphbjmgjsn4jaaia.png" alt="New CI Pipeline Import" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update the pipeline details as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cjbfq9n9akx1xcu21vl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9cjbfq9n9akx1xcu21vl.png" alt="Pipeline Details" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;: Make sure the &lt;strong&gt;Name&lt;/strong&gt; of the pipeline is &lt;code&gt;bootstrap argocd pipeline&lt;/code&gt; to make the import succeed with defaults.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrr4uqouwsuwrw6ascw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrr4uqouwsuwrw6ascw9.png" alt="Pipeline Import Successful" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;code&gt;bootstrap argocd pipeline&lt;/code&gt; from the list to open the &lt;strong&gt;Pipeline Studio&lt;/strong&gt; and click on the stage &lt;strong&gt;Bootstrap Argo CD&lt;/strong&gt; to bring up the pipeline steps,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtazl7vzrl6n7r5y72ar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtazl7vzrl6n7r5y72ar.png" alt="Pipeline Steps" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can click on each step to see the details.&lt;/p&gt;

&lt;p&gt;The Pipeline uses the following secrets,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;google_application_credentials&lt;/code&gt; - the GSA credentials to manipulate GKE&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform_cloud_api_token&lt;/code&gt; - the value of &lt;code&gt;$TF_TOKEN_app_terraform_io&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform_workspace&lt;/code&gt; - the value of &lt;code&gt;$TF_WORKSPACE&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;terraform_cloud_organization&lt;/code&gt; - the value of &lt;code&gt;$TF_CLOUD_ORGANIZATION&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We already added the &lt;code&gt;google_application_credentials&lt;/code&gt; secret in the earlier section. Following a similar pattern, let us add &lt;code&gt;terraform_cloud_api_token&lt;/code&gt;, &lt;code&gt;terraform_workspace&lt;/code&gt; and &lt;code&gt;terraform_cloud_organization&lt;/code&gt; as text secrets.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;HINT&lt;/strong&gt;:&lt;br&gt;
From the &lt;strong&gt;Project Setup&lt;/strong&gt; click &lt;strong&gt;Secrets&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd21zbiyr2gdqjt4j0wrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd21zbiyr2gdqjt4j0wrl.png" alt="New Text Secret" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslblnj5atbig84djnat8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslblnj5atbig84djnat8.png" alt="all terraform secrets" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TIP&lt;/strong&gt;: You can also skip adding &lt;code&gt;terraform_workspace&lt;/code&gt; and &lt;code&gt;terraform_cloud_organization&lt;/code&gt;; their values can be extracted from the webhook payload using the expressions &lt;code&gt;&amp;lt;+trigger.payload.workspace_name&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;+trigger.payload.organization_name&amp;gt;&lt;/code&gt; respectively.&lt;/p&gt;
&lt;/blockquote&gt;
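For reference, a Terraform Cloud run notification carries `workspace_name` and `organization_name` at the top level of its JSON payload; the values below are illustrative. The trigger expressions in the tip resolve those same fields:

```shell
# Abridged TFC run notification payload with illustrative values.
PAYLOAD='{"workspace_name":"vanilla-gke","organization_name":"acme-org","run_status":"applied"}'

# <+trigger.payload.workspace_name> in Harness reads the same field that
# this sed extraction pulls out locally.
workspace_name="$(printf '%s' "$PAYLOAD" | sed -n 's/.*"workspace_name":"\([^"]*\)".*/\1/p')"
echo "$workspace_name"
```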

&lt;h2&gt;
  
  
  Notification Trigger
&lt;/h2&gt;

&lt;p&gt;For the Harness CI pipeline to listen to Terraform Cloud events we need to define a &lt;strong&gt;Trigger&lt;/strong&gt;. Navigate back to the pipelines and select &lt;strong&gt;bootstrap argocd pipeline&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Triggers&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi19na36updy8iw0rbcx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi19na36updy8iw0rbcx.png" alt="Pipeline Triggers" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add New Trigger&lt;/strong&gt; to add a new webhook trigger (Type: &lt;code&gt;Custom&lt;/code&gt;),&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51m5842vy49r8j6v09mg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51m5842vy49r8j6v09mg.png" alt="Custom Webhook Trigger" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;Configuration&lt;/strong&gt; page, enter &lt;code&gt;tfc notification&lt;/code&gt; as the name of the trigger,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7w3tjzgrnfjh7thfs7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7w3tjzgrnfjh7thfs7y.png" alt="TFC Notification Config" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the rest of the fields at their defaults and click &lt;strong&gt;Continue&lt;/strong&gt;; leave the &lt;strong&gt;Conditions&lt;/strong&gt; at their defaults as well and click &lt;strong&gt;Continue&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;Pipeline Input&lt;/strong&gt; page, set the &lt;strong&gt;Pipeline Reference Branch&lt;/strong&gt; to &lt;strong&gt;main&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqc6h0c3zolmsk1e958v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdqc6h0c3zolmsk1e958v.png" alt="Pipeline Input" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: The &lt;strong&gt;Pipeline Reference Branch&lt;/strong&gt; has no effect in this demo, as we clone the resources manually.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click &lt;strong&gt;Create Trigger&lt;/strong&gt; to create and save the trigger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb65schvss82ze38zeil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb65schvss82ze38zeil.png" alt="Trigger List" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Copy Webhook URL
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawr2qr1rbvl3sp96ng6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawr2qr1rbvl3sp96ng6f.png" alt="Webhook URL" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us refer to this value as &lt;code&gt;$TRIGGER_WEBHOOK_URL&lt;/code&gt;.&lt;/p&gt;
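If you want to smoke-test the trigger before wiring up Terraform Cloud, you can POST a small JSON payload to it yourself; the URL below is a placeholder for the value you just copied, and the real request is left commented out so the snippet stays a dry run:

```shell
# Placeholder for the webhook URL copied from the trigger list.
TRIGGER_WEBHOOK_URL="https://app.harness.io/REPLACE_WITH_YOUR_WEBHOOK_PATH"

# Minimal payload carrying the fields the pipeline expects.
PAYLOAD='{"workspace_name":"vanilla-gke","organization_name":"acme-org"}'
echo "POST $TRIGGER_WEBHOOK_URL"

# curl -X POST \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD" "$TRIGGER_WEBHOOK_URL"
```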

&lt;h2&gt;
  
  
  Terraform Notification
&lt;/h2&gt;

&lt;p&gt;On your terraform cloud console navigate to the workspace &lt;strong&gt;Settings&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Notifications&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dz8cfedh4l15eti8wyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dz8cfedh4l15eti8wyi.png" alt="TFC Notifications" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create Notification&lt;/strong&gt; and select &lt;strong&gt;Webhook&lt;/strong&gt; as the &lt;strong&gt;Destination&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkuzrz1ooegbjhvan0ur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkuzrz1ooegbjhvan0ur.png" alt="Webhook" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Update the notification details as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frthp02u6nqt1elqkya0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frthp02u6nqt1elqkya0x.png" alt="TFC Webhook Details" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we need to bootstrap Argo CD only on successful create events, we set the trigger to fire only on &lt;strong&gt;Completed&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ztmiebchh4ybecx6pid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ztmiebchh4ybecx6pid.png" alt="Trigger Events" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create Notification&lt;/strong&gt; to finish creating the notification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z8yl21d2uffup5u4ir5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z8yl21d2uffup5u4ir5.png" alt="TFC Webhook Creation Success" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Creating the notification fires an event immediately; if the cluster is not yet ready, that pipeline run will have failed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Congratulations!&lt;/strong&gt; With this setup, any update pushed to &lt;code&gt;$TFC_GKE_REPO&lt;/code&gt; will trigger a plan and apply on Terraform Cloud. A &lt;strong&gt;Completed&lt;/strong&gt; plan will trigger the &lt;code&gt;bootstrap argocd pipeline&lt;/code&gt; to run and apply the manifests from &lt;code&gt;$BOOTSTRAP_ARGOCD_REPO&lt;/code&gt; on the GKE cluster.&lt;/p&gt;

&lt;p&gt;An example of a successful pipeline run,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mxvq8tmtvi6hlhmzsd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mxvq8tmtvi6hlhmzsd6.png" alt="Pipeline Success" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By using the Terraform Cloud notifications feature, we were able to make our CI pipelines listen to IaC events and run as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa56swhyp5lzm9n5d9kpr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa56swhyp5lzm9n5d9kpr.png" alt="Notification Pattern" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using Workload Identity Continuous Integration(CI) Pipelines</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Thu, 23 Mar 2023 05:22:53 +0000</pubDate>
      <link>https://dev.to/kameshsampath/using-workload-identity-continuous-integrationci-pipelines-1dh5</link>
      <guid>https://dev.to/kameshsampath/using-workload-identity-continuous-integrationci-pipelines-1dh5</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/kameshsampath/what-is-workload-identity--120l"&gt;first part&lt;/a&gt; of this series we learned what a &lt;strong&gt;Workload Identity&lt;/strong&gt; is, and in the &lt;a href="https://dev.to/kameshsampath/applying-workload-identity-with-a-demo-1bf9"&gt;second part&lt;/a&gt; we saw how it enables keyless Google API invocations by deploying a demo application. In this blog we learn how to use &lt;strong&gt;Workload Identity&lt;/strong&gt; with SaaS Continuous Integration (&lt;strong&gt;CI&lt;/strong&gt;) providers.&lt;/p&gt;

&lt;p&gt;Many SaaS Continuous Integration (&lt;strong&gt;CI&lt;/strong&gt;) providers, e.g. &lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-wi-delegate&amp;amp;utm_term=tutorial" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt;, use what is called a &lt;a href="https://developer.harness.io/docs/platform/Delegates/get-started-with-delegates/delegates-overview" rel="noopener noreferrer"&gt;&lt;strong&gt;Delegate&lt;/strong&gt;&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Harness Delegate is a service you run in your local laptop or on Cloud to connect your artifacts, infrastructure, collaboration, verification and other providers, with Harness Manager.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here are some advantages of using Delegates,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Total control of the source code, as Delegates can run within the organisation's own cloud infrastructure&lt;/li&gt;
&lt;li&gt;Cloud cost optimisation, as CI pipelines can leverage the organisation's existing cloud infrastructure&lt;/li&gt;
&lt;li&gt;CI pipelines can leverage native cloud services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this tutorial we will deploy a Harness Delegate onto GKE and understand how enabling Workload Identity on GKE can simplify the CI pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI Pipeline Use Case
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Builds a &lt;a href="https://go.dev/" rel="noopener noreferrer"&gt;go&lt;/a&gt; application; you can build any application, but go is used as the example here.&lt;/li&gt;
&lt;li&gt;Package application build artifact as a container image&lt;/li&gt;
&lt;li&gt;Push the image to &lt;a href="https://cloud.google.com/artifact-registry/" rel="noopener noreferrer"&gt;Google Artifact Registry(&lt;strong&gt;GAR&lt;/strong&gt;)&lt;/a&gt;. &lt;/li&gt;
&lt;li&gt;Cache the build artifacts and dependencies (go modules) on &lt;a href="https://cloud.google.com/storage/" rel="noopener noreferrer"&gt;Google Cloud Storage(&lt;strong&gt;GCS&lt;/strong&gt;)&lt;/a&gt; to make the build process faster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tutorial
&lt;/h2&gt;

&lt;p&gt;Before we get to the tutorial, make sure you have signed up for a free-tier &lt;a href="https://app.harness.io/auth/#/signup/?module=ci&amp;amp;utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-wi-delegate&amp;amp;utm_term=tutorial" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt; account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A &lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud Account&lt;/a&gt; with a Service Account with roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Kubernetes Engine Admin&lt;/code&gt; - to create GKE cluster&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Service Account&lt;/code&gt; permissions to create/update/delete Service Accounts

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.actAs&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.get&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.create&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.delete&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.update&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.getIamPolicy&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.setIamPolicy&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;(or) simply add the &lt;code&gt;Service Account Admin&lt;/code&gt; and &lt;code&gt;Service Account User&lt;/code&gt; roles&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;Compute Network Admin&lt;/code&gt; - to create the VPC networks&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.google.com/sdk" rel="noopener noreferrer"&gt;Google Cloud SDK&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://terraform.build" rel="noopener noreferrer"&gt;terraform&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://helm.sh" rel="noopener noreferrer"&gt;helm&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;a href="https://taskfile.dev" rel="noopener noreferrer"&gt;Taskfile&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Download Sources
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/harness-apps/workload-identity-gke-demo.git &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$_&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEMO_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Environment Setup
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Variables
&lt;/h3&gt;

&lt;p&gt;When working with Google Cloud, the following environment variables help set the right Google Cloud context: the Service Account key file, the project, etc. You can use &lt;a href="https://direnv.net" rel="noopener noreferrer"&gt;direnv&lt;/a&gt; or set the following variables in your shell,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the google cloud service account key json file to use"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLOUDSDK_ACTIVE_CONFIG_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the google cloud cli profile to use"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_CLOUD_PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the google cloud project to use"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/.kube/config"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more information about gcloud cli configurations at &lt;a href="https://cloud.google.com/sdk/docs/configurations" rel="noopener noreferrer"&gt;https://cloud.google.com/sdk/docs/configurations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you may need to override a few terraform variables that you don't want to check in to VCS, add them to a file called &lt;code&gt;.local.tfvars&lt;/code&gt; and set the following environment variable so the file is picked up by the terraform runs,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TFVARS_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.local.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the &lt;a href="https://github.com/harness-apps/workload-identity-gke-demo#inputs" rel="noopener noreferrer"&gt;Inputs&lt;/a&gt; section for all possible terraform variables that are configurable.&lt;/p&gt;

&lt;p&gt;An example &lt;code&gt;.local.tfvars&lt;/code&gt; looks like,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;project_id&lt;/span&gt;                 &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-awesome-gcp-project"&lt;/span&gt;
&lt;span class="nx"&gt;region&lt;/span&gt;                     &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"asia-south1"&lt;/span&gt;
&lt;span class="nx"&gt;cluster_name&lt;/span&gt;               &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"wi-demos"&lt;/span&gt;
&lt;span class="nx"&gt;kubernetes_version&lt;/span&gt;         &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.24."&lt;/span&gt;
&lt;span class="nx"&gt;harness_account_id&lt;/span&gt;         &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"REPLACE WITH YOUR HARNESS ACCOUNT ID"&lt;/span&gt;
&lt;span class="nx"&gt;harness_delegate_token&lt;/span&gt;     &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"REPLACE WITH YOUR HARNESS DELEGATE TOKEN"&lt;/span&gt;
&lt;span class="nx"&gt;harness_delegate_name&lt;/span&gt;      &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"wi-demos-delegate"&lt;/span&gt;
&lt;span class="nx"&gt;harness_delegate_namespace&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"harness-delegate-ng"&lt;/span&gt;
&lt;span class="nx"&gt;harness_manager_endpoint&lt;/span&gt;   &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://app.harness.io/gratis"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create Environment
&lt;/h3&gt;

&lt;p&gt;We will use terraform to create a GKE cluster with &lt;code&gt;WorkloadIdentity&lt;/code&gt; enabled for its nodes,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create GKE cluster
&lt;/h3&gt;

&lt;p&gt;The terraform apply creates a GKE cluster,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task create_cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploy Harness Delegate
&lt;/h3&gt;

&lt;p&gt;The following section deploys a Harness Delegate onto our GKE cluster. To successfully deploy a Harness Delegate we need to update the following values in our &lt;code&gt;.local.tfvars&lt;/code&gt; file,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;harness_account_id&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;harness_delegate_token&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;harness_delegate_namespace&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;harness_manager_endpoint&lt;/code&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use &lt;strong&gt;Account Id&lt;/strong&gt; from Account Overview as the value for &lt;strong&gt;harness_account_id&lt;/strong&gt;,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feom3g1bo25v0rc1iehjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feom3g1bo25v0rc1iehjl.png" alt="account details" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;strong&gt;Harness Cluster Hosting Account&lt;/strong&gt; from the account details to find the matching endpoint URL, e.g. for &lt;code&gt;prod-2&lt;/code&gt; it is &lt;a href="https://app.harness.io/gratis" rel="noopener noreferrer"&gt;https://app.harness.io/gratis&lt;/a&gt;, and set that as the value for &lt;code&gt;harness_manager_endpoint&lt;/code&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;TIP: &lt;br&gt;
You can find the endpoint corresponding to your &lt;strong&gt;Harness Cluster Hosting Account&lt;/strong&gt; from &lt;a href="https://developer.harness.io/tutorials/platform/install-delegate/" rel="noopener noreferrer"&gt;https://developer.harness.io/tutorials/platform/install-delegate/&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Copy the default token from &lt;strong&gt;Projects&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Project Setup&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Delegates&lt;/strong&gt;(&lt;strong&gt;Tokens&lt;/strong&gt;) and set it as the value for &lt;code&gt;harness_delegate_token&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0y1vxdxunxal3li1xug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0y1vxdxunxal3li1xug.png" alt="copy default token" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;harness_delegate_name&lt;/code&gt;: defaults to &lt;strong&gt;harness-delegate&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;harness_delegate_namespace&lt;/code&gt;: defaults to &lt;strong&gt;harness-delegate-ng&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Having updated &lt;code&gt;.local.tfvars&lt;/code&gt;, run the following command to deploy the Harness Delegate,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task deploy_harness_delegate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: It will take some time for the delegate to come up and connect.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Wait for the delegate to be connected before proceeding to the next steps. &lt;/p&gt;

&lt;p&gt;You can view the delegate's status from the &lt;strong&gt;Project&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Project Setup&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Delegates&lt;/strong&gt; page,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrjd9xd1k3r2pa6g35c3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrjd9xd1k3r2pa6g35c3.png" alt="delegate status" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check the running Harness delegate pods by using &lt;code&gt;kubectl&lt;/code&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; harness-delegate-ng    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should look like the following; the pod name may vary based on your &lt;code&gt;harness_delegate_name&lt;/code&gt; value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                 READY   STATUS    RESTARTS   AGE
harness-delegate-6bfd78d5cb-5h8x9   1/1     Running   0          2m23s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
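&lt;p&gt;Rather than polling the UI, you can also wait on the delegate deployment from the command line. A sketch; the deployment name follows your &lt;code&gt;harness_delegate_name&lt;/code&gt; value, so adjust it if you overrode the default,&lt;/p&gt;

```shell
# Wait up to 5 minutes for the delegate deployment to finish rolling out.
# "harness-delegate" is the default delegate name; change it if you set a
# different harness_delegate_name in .local.tfvars.
kubectl rollout status -n harness-delegate-ng \
  deployment/harness-delegate --timeout=300s
```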



&lt;h3&gt;
  
  
  Build Application
&lt;/h3&gt;

&lt;p&gt;Having deployed the Harness delegate, let us build a CI pipeline that builds and pushes the same &lt;a href="https://github.com/harness-apps/workload-identity-gke-demo/tree/main/app" rel="noopener noreferrer"&gt;go app&lt;/a&gt; to GAR.&lt;/p&gt;

&lt;h4&gt;
  
  
  Import Template
&lt;/h4&gt;

&lt;p&gt;The sources already include a &lt;a href="https://github.com/harness-apps/workload-identity-gke-demo/blob/main/.harness/ko_gar_build_push_1.yaml" rel="noopener noreferrer"&gt;build stage&lt;/a&gt; template that can be used to create the CI pipeline.&lt;/p&gt;

&lt;p&gt;Navigate to your Harness Account, &lt;strong&gt;Account Overview&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Organizations&lt;/strong&gt; and select &lt;strong&gt;default&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj07f38navg5c3udfjwan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj07f38navg5c3udfjwan.png" alt="default org select" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the Organization overview page select &lt;strong&gt;Templates&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nii2f5wqwsr8lkvo5l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2nii2f5wqwsr8lkvo5l1.png" alt="templates select" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;New Template&lt;/strong&gt; and choose &lt;strong&gt;Import From Git&lt;/strong&gt; option,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme4lm1hzsqhuygwoid4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme4lm1hzsqhuygwoid4t.png" alt="import from git" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fill the wizard with values as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiff39t3r5tmaxr1fcrem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiff39t3r5tmaxr1fcrem.png" alt="import from git details" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If you want to use your fork of &lt;code&gt;harness-apps/workload-identity-gke-demo&lt;/code&gt; then update &lt;em&gt;Repository&lt;/em&gt; with your fork.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0grrjx3t1mq74xje7tg0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0grrjx3t1mq74xje7tg0.png" alt="import template successful" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Pipeline
&lt;/h2&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Builds&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Pipelines&lt;/strong&gt;, click &lt;strong&gt;Create Pipeline&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5cyqbs2xb43l9uzyt2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5cyqbs2xb43l9uzyt2f.png" alt="create pipeline" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add Stage&lt;/strong&gt;, click &lt;strong&gt;Use template&lt;/strong&gt;, choose the &lt;strong&gt;ko_gar_build_push&lt;/strong&gt; template that we imported earlier, and click &lt;strong&gt;Use template&lt;/strong&gt; to complete the import.&lt;/p&gt;

&lt;p&gt;Enter details about the stage,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma34ba7s3dqw0y8qqlm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma34ba7s3dqw0y8qqlm1.png" alt="stage details" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Setup Stage&lt;/strong&gt; to create the stage and fill in the other details, i.e. &lt;strong&gt;Template Inputs&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6crm2c5ygs72juades6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6crm2c5ygs72juades6.png" alt="template inputs" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We use the &lt;code&gt;default&lt;/code&gt; namespace to run the builder pods. The build pod runs with the Kubernetes Service Account (KSA) &lt;code&gt;harness-builder&lt;/code&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;:&lt;br&gt;
The &lt;code&gt;harness-builder&lt;/code&gt; KSA is mapped to the Google IAM Service Account (GSA) &lt;code&gt;harness-delegate&lt;/code&gt; and inherits its GCP roles via Workload Identity; in this case, the permission to push images to Google Artifact Registry (GAR).&lt;/p&gt;
&lt;/blockquote&gt;
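&lt;p&gt;For illustration, the KSA-to-GSA mapping that the demo's terraform sets up is roughly equivalent to the following two commands. Treat this as a sketch; the repo's terraform is the source of truth,&lt;/p&gt;

```shell
# 1. Allow the KSA default/harness-builder to impersonate the GSA
#    harness-delegate via Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  "harness-delegate@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com" \
  --role "roles/iam.workloadIdentityUser" \
  --member "serviceAccount:${GOOGLE_CLOUD_PROJECT}.svc.id.goog[default/harness-builder]"

# 2. Annotate the KSA with the GSA it maps to
kubectl annotate serviceaccount harness-builder -n default \
  "iam.gke.io/gcp-service-account=harness-delegate@${GOOGLE_CLOUD_PROJECT}.iam.gserviceaccount.com"
```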

&lt;p&gt;Click &lt;strong&gt;Run&lt;/strong&gt; to execute the pipeline and watch the image get built and pushed to GAR,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbidxfctui4b3zmmoyuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbidxfctui4b3zmmoyuv.png" alt="Run Pipeline" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A successful run pushes the image to GAR; in this example it is &lt;code&gt;asia-south1-docker.pkg.dev/pratyakshika/demos/lingua-greeter:latest&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlo8z6klt1qtfidzgn8h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlo8z6klt1qtfidzgn8h.png" alt="Build Success" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;
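&lt;p&gt;You can verify the push from the command line as well; a sketch using the repository path from this example,&lt;/p&gt;

```shell
# List the images in the GAR repository used in this example
# (replace the project and repository with your own)
gcloud artifacts docker images list asia-south1-docker.pkg.dev/pratyakshika/demos
```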

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;To clean up all the Google Cloud resources that were created as part of this demo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;By using a Workload Identity delegate we have simplified and secured our CI pipelines, which can now use any Google API service by configuring the GSA with the right roles and permissions. The CI SaaS platform no longer needs to store or update Google API credentials.&lt;/p&gt;

&lt;p&gt;Having deployed the Workload Identity delegate, you can also do &lt;a href="https://docs.sigstore.dev/cosign/sign/#keyless-signing" rel="noopener noreferrer"&gt;keyless signing&lt;/a&gt; of your container images using Google Application Credentials. For more info check &lt;a href="https://sigstore.dev" rel="noopener noreferrer"&gt;cosign&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For more tutorials and documentation please visit &lt;a href="https://developer.harness.io" rel="noopener noreferrer"&gt;https://developer.harness.io&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Applying Workload Identity With A Demo</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Mon, 13 Mar 2023 04:20:19 +0000</pubDate>
      <link>https://dev.to/kameshsampath/applying-workload-identity-with-a-demo-1bf9</link>
      <guid>https://dev.to/kameshsampath/applying-workload-identity-with-a-demo-1bf9</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/kameshsampath/what-is-workload-identity--120l"&gt;first part&lt;/a&gt; of the series we learned what &lt;strong&gt;Workload Identity&lt;/strong&gt; is. In this DIY blog we will apply &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity" rel="noopener noreferrer"&gt;Workload Identity&lt;/a&gt; to our GKE workloads by deploying a demo application called &lt;strong&gt;lingua-greeter&lt;/strong&gt; to GKE, which calls the &lt;a href="https://cloud.google.com/translate" rel="noopener noreferrer"&gt;Translate API&lt;/a&gt; to translate the greeting text passed to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Glossary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Abbreviation&lt;/th&gt;
&lt;th&gt;Expansion&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;API&lt;/td&gt;
&lt;td&gt;Application Programming Interface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACL&lt;/td&gt;
&lt;td&gt;Access Control List&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCS&lt;/td&gt;
&lt;td&gt;Google Cloud Storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GKE&lt;/td&gt;
&lt;td&gt;Google Kubernetes Engine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GSA&lt;/td&gt;
&lt;td&gt;Google Service Account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IAM&lt;/td&gt;
&lt;td&gt;Identity and Access Management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KSA&lt;/td&gt;
&lt;td&gt;Kubernetes Service Account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RBAC&lt;/td&gt;
&lt;td&gt;Role Based Access Control&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SA&lt;/td&gt;
&lt;td&gt;Service Account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VPC&lt;/td&gt;
&lt;td&gt;Virtual Private Cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud Account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;With a &lt;a href="https://cloud.google.com/iam/docs/service-account-overview" rel="noopener noreferrer"&gt;Service Account&lt;/a&gt; with roles:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Kubernetes Engine Admin&lt;/code&gt; - to create GKE cluster&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Service Account&lt;/code&gt; permissions used to create/update/delete Service Accounts:&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.actAs&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.get&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.create&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.delete&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.update&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;iam.serviceAccounts.getIamPolicy&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;iam.serviceAccounts.setIamPolicy&lt;/em&gt;
Or, more simply, you can add the &lt;code&gt;Service Account Admin&lt;/code&gt; and &lt;code&gt;Service Account User&lt;/code&gt; roles&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Compute Network Admin&lt;/code&gt;   - to create the VPC networks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Enable &lt;a href="https://console.cloud.google.com/apis/library/translate.googleapis.com" rel="noopener noreferrer"&gt;Translation API&lt;/a&gt; on your Google Cloud Account&lt;/li&gt;

&lt;li&gt;&lt;a href="https://cloud.google.com/sdk" rel="noopener noreferrer"&gt;Google Cloud SDK&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://terraform.build" rel="noopener noreferrer"&gt;terraform&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;&lt;/li&gt;

&lt;li&gt;&lt;a href="https://taskfile.dev" rel="noopener noreferrer"&gt;Taskfile&lt;/a&gt;&lt;/li&gt;

&lt;/ul&gt;
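&lt;p&gt;If you prefer to grant the broader roles from the CLI, the bindings look roughly like the following. The service account email and project id are placeholders,&lt;/p&gt;

```shell
# Grant the roles listed above to the service account that terraform uses.
# SA email and project id are placeholders; replace them with your own.
SA="terraform-sa@my-awesome-project.iam.gserviceaccount.com"
for role in roles/container.admin roles/iam.serviceAccountAdmin \
            roles/iam.serviceAccountUser roles/compute.networkAdmin; do
  gcloud projects add-iam-policy-binding my-awesome-project \
    --member="serviceAccount:${SA}" --role="${role}"
done
```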

&lt;h3&gt;
  
  
  Optional
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kustomize.io" rel="noopener noreferrer"&gt;kustomize&lt;/a&gt;(Optional)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://direnv.net" rel="noopener noreferrer"&gt;direnv&lt;/a&gt;(Optional)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Download Sources
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/kameshsampath/workload-identiy-gke-demo &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$_&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEMO_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Environment Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Variables
&lt;/h3&gt;

&lt;p&gt;When working with Google Cloud, the following environment variables help set the right Google Cloud context, such as the Service Account key file and the project. You can use &lt;a href="https://direnv.net" rel="noopener noreferrer"&gt;direnv&lt;/a&gt; or set the following variables in your shell,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the google cloud service account key json file to use"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLOUDSDK_ACTIVE_CONFIG_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the google cloud cli profile to use"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_CLOUD_PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"the google cloud project to use"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/.kube"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(e.g.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLOUDSDK_ACTIVE_CONFIG_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;personal
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;~/.ssh/my-sa-key.json
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GOOGLE_CLOUD_PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-awesome-project
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/.kube"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TIP&lt;/strong&gt;: If you are using direnv, create a file named &lt;code&gt;.envrc.local&lt;/code&gt; and add the environment variables there. They can then be loaded using &lt;code&gt;direnv allow .&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can find more information about gcloud cli configurations at &lt;a href="https://cloud.google.com/sdk/docs/configurations" rel="noopener noreferrer"&gt;https://cloud.google.com/sdk/docs/configurations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We will be using &lt;a href="https://terraform.build" rel="noopener noreferrer"&gt;terraform&lt;/a&gt; to create the Google Cloud resources e.g. GKE Cluster with &lt;strong&gt;Workload Identity&lt;/strong&gt; enabled, Google Service Accounts(GSA), &lt;a href="https://cloud.google.com/iam/docs/overview" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; policies and bindings.&lt;/p&gt;

&lt;p&gt;You may need to override a few terraform variables that you don't want to check in to VCS; add them to a file called &lt;code&gt;.local.tfvars&lt;/code&gt;. Set the following environment variable to make terraform use the variable values from the file &lt;code&gt;.local.tfvars&lt;/code&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TFVARS_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.local.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the &lt;a href="https://github.com/kameshsampath/workload-identity-gke-demo#inputs" rel="noopener noreferrer"&gt;Inputs&lt;/a&gt; section for all possible terraform variables that are configurable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;An example &lt;code&gt;.local.tfvars&lt;/code&gt; that uses the Google Cloud project &lt;strong&gt;my-awesome-project&lt;/strong&gt; and creates a two-node GKE cluster named &lt;strong&gt;wi-demo&lt;/strong&gt; in the region &lt;strong&gt;asia-south1&lt;/strong&gt;, with Kubernetes version &lt;strong&gt;1.24&lt;/strong&gt; from the &lt;strong&gt;stable&lt;/strong&gt; release channel. The machine type of each cluster node will be &lt;strong&gt;e2-standard-4&lt;/strong&gt;. The demo will be deployed in the Kubernetes namespace &lt;strong&gt;demo-apps&lt;/strong&gt; and will use &lt;strong&gt;lingua-greeter&lt;/strong&gt; as the Kubernetes Service Account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;app_ksa&lt;/span&gt;            &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lingua-greeter"&lt;/span&gt;
&lt;span class="nx"&gt;app_namespace&lt;/span&gt;      &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"demo-apps"&lt;/span&gt;
&lt;span class="nx"&gt;cluster_name&lt;/span&gt;       &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"wi-demo"&lt;/span&gt;
&lt;span class="nx"&gt;configure_app_workload_identity&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="nx"&gt;gke_num_nodes&lt;/span&gt;      &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="nx"&gt;kubernetes_version&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.24."&lt;/span&gt;
&lt;span class="nx"&gt;machine_type&lt;/span&gt;       &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"e2-standard-4"&lt;/span&gt;
&lt;span class="nx"&gt;project_id&lt;/span&gt;         &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-awesome-project"&lt;/span&gt;
&lt;span class="nx"&gt;region&lt;/span&gt;             &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"asia-south1"&lt;/span&gt;
&lt;span class="nx"&gt;release_channel&lt;/span&gt;    &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"stable"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: For the rest of this section we assume that your tfvars file is called &lt;code&gt;.local.tfvars&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Application Overview
&lt;/h2&gt;

&lt;p&gt;As part of the demo, let us deploy a Kubernetes application called &lt;code&gt;lingua-greeter&lt;/code&gt;. The application exposes a REST API endpoint &lt;code&gt;/:lang&lt;/code&gt; that translates the text &lt;code&gt;Hello World!&lt;/code&gt; into the language &lt;code&gt;:lang&lt;/code&gt; using the Google Translate client.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: The &lt;code&gt;:lang&lt;/code&gt; is the &lt;a href="https://en.wikipedia.org/wiki/IETF_language_tag" rel="noopener noreferrer"&gt;BCP 47&lt;/a&gt; language code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwxe3acyxlgloo8b17rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwxe3acyxlgloo8b17rq.png" alt="Apply Workload Identity" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Environment
&lt;/h2&gt;

&lt;p&gt;We will use terraform to create a GKE cluster with &lt;code&gt;WorkloadIdentity&lt;/code&gt; enabled for its nodes,&lt;/p&gt;

&lt;p&gt;Initialize terraform and download its modules,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create GKE cluster
&lt;/h3&gt;

&lt;p&gt;The terraform apply creates a Kubernetes (GKE) cluster,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task create_cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terraform apply will create the following Google Cloud resources,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster on GKE&lt;/li&gt;
&lt;li&gt;A Google Cloud VPC that will be used with GKE&lt;/li&gt;
&lt;li&gt;GKE is configured to use a &lt;a href="https://github.com/kameshsampath/workload-identity-gke-demo/blob/main/gke.tf#L41" rel="noopener noreferrer"&gt;Workload Identity Pool&lt;/a&gt;. As a refresher, check &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#how_works" rel="noopener noreferrer"&gt;How Workload Identity Works&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
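&lt;p&gt;The key setting is the cluster's &lt;code&gt;workload_identity_config&lt;/code&gt; block. A minimal sketch follows; the repo's &lt;code&gt;gke.tf&lt;/code&gt; is the source of truth and the project id here is a placeholder,&lt;/p&gt;

```hcl
resource "google_container_cluster" "wi_demo" {
  name     = "wi-demo"
  location = "asia-south1"

  # Enables Workload Identity on the cluster; the pool name is always
  # "<project-id>.svc.id.goog"
  workload_identity_config {
    workload_pool = "my-awesome-project.svc.id.goog"
  }
}
```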

&lt;h2&gt;
  
  
  Deploy Application
&lt;/h2&gt;

&lt;p&gt;To see &lt;strong&gt;Workload Identity&lt;/strong&gt; in action we will deploy the application(workload) on to GKE in two parts,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application is &lt;strong&gt;not&lt;/strong&gt; enabled for Workload Identity&lt;/li&gt;
&lt;li&gt;Application &lt;strong&gt;enabled&lt;/strong&gt; for Workload Identity&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Without Workload Identity Enabled
&lt;/h3&gt;

&lt;p&gt;Create the namespace &lt;code&gt;demo-apps&lt;/code&gt; to deploy the &lt;code&gt;lingua-greeter&lt;/code&gt; application,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create ns demo-apps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command to deploy the application,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps &lt;span class="nt"&gt;-k&lt;/span&gt; &lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;/app/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for the application to be ready,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps deployment/lingua-greeter &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;60s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Get the application service LoadBalancer IP,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps lingua-greeter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: If the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; is &lt;code&gt;&amp;lt;pending&amp;gt;&lt;/code&gt;, wait for the IP to be assigned; it may take a few minutes.&lt;br&gt;
You can use the following command to wait until the &lt;code&gt;EXTERNAL-IP&lt;/code&gt; is assigned,&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps lingua-greeter &lt;span class="nt"&gt;-ojsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.status.loadBalancer.ingress[*].ip}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;sleep&lt;/span&gt; .3&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Call Service
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SERVICE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps lingua-greeter &lt;span class="nt"&gt;-ojsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.status.loadBalancer.ingress[*].ip}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call the service to return the translation of &lt;code&gt;Hello World!&lt;/code&gt; in &lt;a href="https://en.wikipedia.org/wiki/Tamil_language" rel="noopener noreferrer"&gt;Tamil(ta)&lt;/a&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="nv"&gt;$SERVICE_IP&lt;/span&gt;&lt;span class="s2"&gt;/ta"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service should fail with a message,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"message":"Internal Server Error"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you check the logs of the &lt;code&gt;lingua-greeter&lt;/code&gt; pod,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps &lt;span class="nt"&gt;-lapp&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;lingua-greeter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a message like,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ____    __
  / __/___/ /  ___
 / _// __/ _ \/ _ \
/___/\__/_//_/\___/ v4.10.0
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
                                    O\
⇨ http server started on [::]:8080
time="2023-03-10T07:36:35Z" level=error msg="googleapi: Error 401: Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.\nMore details:\nReason: authError, Message: Invalid Credentials\n"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the error describes, the application does not have authentication credentials to call the API. Every Google Cloud API requires &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; so that the client can authenticate itself before calling the API. If you check the &lt;a href="//./../app/config/deployment.yaml"&gt;deployment manifest&lt;/a&gt;, we don't have one configured.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Application To Use Workload Identity
&lt;/h3&gt;

&lt;p&gt;Run the following command to configure the application to use &lt;strong&gt;Workload Identity&lt;/strong&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task use_workload_identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The terraform script &lt;a href="https://github.com/kameshsampath/workload-identity-gke-demo/blob/main/rbac.tf" rel="noopener noreferrer"&gt;rbac.tf&lt;/a&gt; does the following,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a GSA called &lt;code&gt;translator&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Add role &lt;code&gt;roles/iam.workloadIdentityUser&lt;/code&gt; to &lt;code&gt;translator&lt;/code&gt; with a single member &lt;code&gt;serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[demo-apps/lingua-greeter]&lt;/code&gt;. This allows the KSA to impersonate the GSA &lt;code&gt;translator&lt;/code&gt;, thereby letting it call any Google Cloud service that the GSA &lt;code&gt;translator&lt;/code&gt; is allowed to call, in this case the Google Translate API.&lt;/li&gt;
&lt;li&gt;Add IAM policy binding to &lt;code&gt;translator&lt;/code&gt; for role &lt;code&gt;roles/cloudtranslate.user&lt;/code&gt; which allows it to call the Google Translate API.&lt;/li&gt;
&lt;li&gt;Finally, generate an updated &lt;code&gt;lingua-greeter&lt;/code&gt; KSA manifest &lt;code&gt;$DEMO_HOME/k8s/sa.yaml&lt;/code&gt;, annotated with the &lt;code&gt;client_email&lt;/code&gt; of the GSA it is allowed to use, in this case something like &lt;code&gt;translator@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com&lt;/code&gt;.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServiceAccount&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lingua-greeter&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-apps&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;iam.gke.io/gcp-service-account&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;translator@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
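
&lt;p&gt;For reference, the same bindings could be created manually with &lt;code&gt;gcloud&lt;/code&gt;; this is only a sketch of what the terraform script does, not a substitute for running it,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# create the GSA named translator
gcloud iam service-accounts create translator \
  --project "$GOOGLE_CLOUD_PROJECT"

# allow the KSA demo-apps/lingua-greeter to impersonate the GSA
gcloud iam service-accounts add-iam-policy-binding \
  "translator@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[demo-apps/lingua-greeter]"

# let the GSA call the Google Translate API
gcloud projects add-iam-policy-binding "$GOOGLE_CLOUD_PROJECT" \
  --member "serviceAccount:translator@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
  --role roles/cloudtranslate.user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;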



&lt;p&gt;Run the following command to update the Kubernetes SA &lt;code&gt;lingua-greeter&lt;/code&gt; so that it uses the Google IAM Service Account (GSA) &lt;code&gt;translator&lt;/code&gt; via the &lt;strong&gt;Workload Identity mechanics&lt;/strong&gt; and can call the Google Translate API,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; demo-apps &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/k8s/sa.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call the service again; it should now succeed with a response,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"Hello World!"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"translation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"வணக்கம் உலகம்!"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"translationLanguage"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"ta"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: Sometimes it may take a few seconds for the pods to refresh the metadata; in that case, retry the call after a short wait.&lt;/p&gt;
&lt;/blockquote&gt;
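
&lt;p&gt;To double-check which identity the pod is using, you can query the GKE metadata server from inside a pod; this is a sketch that assumes the container image ships with &lt;code&gt;curl&lt;/code&gt; and that the deployment is named &lt;code&gt;lingua-greeter&lt;/code&gt;,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# prints the GSA email the workload authenticates as,
# e.g. translator@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com
kubectl exec -n demo-apps deploy/lingua-greeter -- \
  curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;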

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;To clean up all the Google Cloud resources that were created as part of this demo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;task destroy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Deploy GKE cluster with Workload Identity enabled&lt;/li&gt;
&lt;li&gt;Deploy &lt;code&gt;lingua-greeter&lt;/code&gt; application to GKE&lt;/li&gt;
&lt;li&gt;Create Google Service Account &lt;code&gt;translator&lt;/code&gt; with permissions to call Google Translate API&lt;/li&gt;
&lt;li&gt;Annotate the Kubernetes Service Account &lt;code&gt;lingua-greeter&lt;/code&gt; with the Google Service Account &lt;code&gt;translator&lt;/code&gt;, allowing it to impersonate that GSA&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>cloud</category>
      <category>diy</category>
      <category>terraform</category>
    </item>
    <item>
      <title>What is Workload Identity ?</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Mon, 13 Mar 2023 04:09:15 +0000</pubDate>
      <link>https://dev.to/kameshsampath/what-is-workload-identity--120l</link>
      <guid>https://dev.to/kameshsampath/what-is-workload-identity--120l</guid>
<description>&lt;p&gt;The awesomeness of using the Cloud and its ecosystem is that we don't need to reinvent the wheel; in other words, there is no need to build and deploy services such as databases, message queues, or AI/ML ourselves. For any service we need, the APIs are just a click away.&lt;/p&gt;

&lt;h2&gt;
  
  
  Glossary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Abbreviation&lt;/th&gt;
&lt;th&gt;Expansion&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;API&lt;/td&gt;
&lt;td&gt;Application Programming Interface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACL&lt;/td&gt;
&lt;td&gt;Access Control List&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GCS&lt;/td&gt;
&lt;td&gt;Google Cloud Storage&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GKE&lt;/td&gt;
&lt;td&gt;Google Kubernetes Engine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GSA&lt;/td&gt;
&lt;td&gt;Google Service Account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IAM&lt;/td&gt;
&lt;td&gt;Identity and Access Management&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;KSA&lt;/td&gt;
&lt;td&gt;Kubernetes Service Account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RBAC&lt;/td&gt;
&lt;td&gt;Role Based Access Control&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SA&lt;/td&gt;
&lt;td&gt;Service Account&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;VPC&lt;/td&gt;
&lt;td&gt;Virtual Private Cloud&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt; provides a rich set of services(API) e.g. &lt;a href="https://cloud.google.com/kubernetes-engine/" rel="noopener noreferrer"&gt;GKE&lt;/a&gt;, &lt;a href="https://cloud.google.com/translate" rel="noopener noreferrer"&gt;Translation&lt;/a&gt;, &lt;a href="https://cloud.google.com/pubsub/docs/overview" rel="noopener noreferrer"&gt;Pub/Sub&lt;/a&gt; etc., which can cater most of the distributed application needs with less or no effort.&lt;/p&gt;

&lt;p&gt;GKE is one of the most commonly used Google Cloud services, allowing you to deploy Cloud Native services swiftly. Applications deployed to GKE often need to leverage other services offered by Google Cloud, e.g. calling the Google Cloud Translation service, publishing a message to a Pub/Sub topic, or uploading/downloading files from &lt;a href="https://cloud.google.com/storage/" rel="noopener noreferrer"&gt;GCS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, merely by virtue of running on Google Cloud infrastructure, Kubernetes pods on GKE cannot access any Google Cloud service at will.&lt;/p&gt;

&lt;p&gt;Every API consumer, in this case a GKE application, needs to be authenticated and authorised to call a Google Cloud API. Google &lt;a href="https://cloud.google.com/iam/docs/overview" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; provides mechanisms to generate &lt;strong&gt;credentials&lt;/strong&gt; for consumers. For application consumers it is always recommended to use a &lt;a href="https://cloud.google.com/iam/docs/service-account-overview" rel="noopener noreferrer"&gt;SA&lt;/a&gt;, as Service Accounts can be used as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;principal&lt;/strong&gt; - access to a Google API can be granted directly to a principal&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;resource&lt;/strong&gt; - other principals, e.g. users, groups, etc., can be added to a Service Account with specific role scopes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check out &lt;a href="https://cloud.google.com/iam/docs/service-account-overview#service-account-permissions" rel="noopener noreferrer"&gt;Service Account Permissions&lt;/a&gt; for more details.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Service Account Keys
&lt;/h2&gt;

&lt;p&gt;Great! We understood that every consumer needs &lt;strong&gt;Google IAM credentials&lt;/strong&gt; to call a Google Cloud Service API.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do API consumers get this API Credential?
&lt;/h3&gt;

&lt;p&gt;There are a few ways to do that; for this blog series we will use a &lt;a href="https://cloud.google.com/iam/docs/service-account-overview" rel="noopener noreferrer"&gt;Google IAM Service Account(GSA)&lt;/a&gt;. Each GSA can have one or more JSON keys called &lt;a href="https://cloud.google.com/iam/docs/keys-create-delete" rel="noopener noreferrer"&gt;SA keys&lt;/a&gt;. The SA key file has critical information like &lt;em&gt;private_key_id&lt;/em&gt;, &lt;em&gt;private_key&lt;/em&gt;, &lt;em&gt;client_email&lt;/em&gt;, &lt;em&gt;client_id&lt;/em&gt;, etc., that is used to identify the SA with Google Cloud IAM and make the necessary service calls.&lt;/p&gt;
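
&lt;p&gt;As an illustration, an SA key can be created with &lt;code&gt;gcloud&lt;/code&gt;; the service account name and project below are hypothetical,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# creates and downloads a JSON key for the GSA; store it securely
gcloud iam service-accounts keys create key.json \
  --iam-account "my-app-sa@my-project.iam.gserviceaccount.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;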

&lt;blockquote&gt;
&lt;p&gt;IMPORTANT:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The JSON key needs to be stored securely&lt;/li&gt;
&lt;li&gt;The &lt;a href="https://cloud.google.com/iam/docs/reference/rest/v1/Policy" rel="noopener noreferrer"&gt;Google IAM policies&lt;/a&gt; determine what authorisations are available for the associated GSA.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Using GSA JSON Key
&lt;/h3&gt;

&lt;p&gt;All Google API client libraries by default look for an environment variable &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; and use it to perform the required authentication and authorisation. So, as developers, once we have access to the SA JSON key we set the path to the file as the value of the &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; environment variable.&lt;/p&gt;

&lt;p&gt;On GKE these credentials are usually stored as &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noopener noreferrer"&gt;Kubernetes Secrets&lt;/a&gt;. The application pods can then mount these &lt;a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="noopener noreferrer"&gt;secrets as files&lt;/a&gt;&lt;br&gt;
 and set the environment variable &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; to the mounted file path.&lt;/p&gt;
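
&lt;p&gt;As a sketch of that setup (the secret name and mount path here are hypothetical),&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# store the SA key file as a Kubernetes Secret
kubectl create secret generic gsa-key \
  --from-file=key.json=key.json

# the deployment manifest then mounts the secret as a file,
# e.g. at /var/secrets/google/key.json, and sets
#   GOOGLE_APPLICATION_CREDENTIALS=/var/secrets/google/key.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;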

&lt;p&gt;Though this way of using a &lt;strong&gt;static key&lt;/strong&gt; file is quick, it lacks security and manageability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The keys do not expire and have to be rotated manually.&lt;/li&gt;
&lt;li&gt;Key rotation and compromise have a cascading effect, i.e. if a key is compromised, it needs to be regenerated and shared with all consumers.&lt;/li&gt;
&lt;li&gt;On the cloud it is recommended to follow the &lt;a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege" rel="noopener noreferrer"&gt;Principle of least privilege&lt;/a&gt;, which means a GSA is given very few permissions. So if your application accesses multiple services, you might need to generate multiple keys, one for each service, making it even harder to rotate all those static keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Is there a &lt;strong&gt;keyless&lt;/strong&gt; way to call the APIs?
&lt;/h3&gt;

&lt;p&gt;All that the &lt;strong&gt;static key&lt;/strong&gt; file does is identify the caller to the Google Cloud platform using the details from the JSON key file. Once its identity is proven, i.e. the associated GSA is known, a &lt;strong&gt;short-lived access token&lt;/strong&gt; is generated and shared with the consumer. The consumer can then use that token for making all authorised API calls.&lt;/p&gt;

&lt;p&gt;So if the application pods, i.e. the &lt;strong&gt;workload&lt;/strong&gt;, can identify themselves by some other mechanism, then we might not need a &lt;strong&gt;static key&lt;/strong&gt;. That's exactly what &lt;strong&gt;Workload Identity&lt;/strong&gt; is for.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Workload Identity&lt;/strong&gt; allows a Kubernetes service account in your GKE cluster to act as a Google IAM Service Account. Pods that use the configured KSA automatically authenticate as the IAM service account when accessing Google Cloud APIs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When GKE is enabled with Workload Identity, the fixed workload identity pool lets Google Cloud IAM map a KSA to the GSA associated with it, i.e. the KSA impersonates the GSA. With this impersonation, an application running with that KSA can call Google services as the GSA.&lt;/p&gt;
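
&lt;p&gt;For context, Workload Identity is enabled on a cluster by configuring its workload pool; the cluster name below is hypothetical, and you may also need to pass your zone or region,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# enable Workload Identity on an existing GKE cluster
gcloud container clusters update my-cluster \
  --workload-pool="$GOOGLE_CLOUD_PROJECT.svc.id.goog"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;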

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9n8mitvsidt16o7h4hs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9n8mitvsidt16o7h4hs.png" alt="Workload Identity" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the official &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity#how_works" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for more details on how Workload Identity works under the hood.&lt;/p&gt;

&lt;p&gt;So with &lt;em&gt;Workload Identity&lt;/em&gt; we can address the drawbacks of &lt;strong&gt;static key&lt;/strong&gt; files, and in addition:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide fine-grained access control to each application with a KSA and GSA combination.&lt;/li&gt;
&lt;li&gt;Workload Identity uses &lt;strong&gt;short-lived tokens&lt;/strong&gt; rather than &lt;em&gt;long-lived static keys&lt;/em&gt;, thereby adding more security.&lt;/li&gt;
&lt;li&gt;The application workloads no longer need to configure &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; or assume the right keys are available.&lt;/li&gt;
&lt;li&gt;For Ops, ACL/RBAC can be centrally managed via Google Cloud IAM from the Google Cloud Web Console.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We need Google IAM credentials to call any Google API&lt;/li&gt;
&lt;li&gt;How to create a Google Service Account key (JSON), i.e. a static key&lt;/li&gt;
&lt;li&gt;How to use a Google Service Account key to call a Google API&lt;/li&gt;
&lt;li&gt;How Google IAM roles/permissions restrict what APIs a Google Service Account can call&lt;/li&gt;
&lt;li&gt;How Google Service Account keys are used by GKE application pods&lt;/li&gt;
&lt;li&gt;Finally, what Workload Identity is and how it enables keyless API invocations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/kameshsampath/applying-workload-identity-with-a-demo-1bf9"&gt;next part&lt;/a&gt; of this series, let us see how to apply Workload Identity with a demo.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>cloud</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Continuously Integrate Go Applications</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Mon, 06 Mar 2023 16:05:59 +0000</pubDate>
      <link>https://dev.to/kameshsampath/continuously-integrate-go-applications-3db</link>
      <guid>https://dev.to/kameshsampath/continuously-integrate-go-applications-3db</guid>
      <description>&lt;p&gt;At the end of this tutorial you will learn,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to build Go application container image without using a &lt;em&gt;Dockerfile&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;What are &lt;a href="https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts#secrets_management" rel="noopener noreferrer"&gt;&lt;strong&gt;Secrets&lt;/strong&gt;&lt;/a&gt; and how to add them to your Project&lt;/li&gt;
&lt;li&gt;What are &lt;a href="https://docs.harness.io/article/hv2758ro4e-learn-harness-key-concepts#connectors" rel="noopener noreferrer"&gt;&lt;strong&gt;Connectors&lt;/strong&gt;&lt;/a&gt; and how to add a Docker Registry Connector to your Project&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Pre-requisites
&lt;/h2&gt;

&lt;p&gt;Before you get started with the tutorial, make sure you have the following accounts, credentials and tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;a href="https://github.com" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; account, where you may need to fork the tutorial sources.&lt;/li&gt;
&lt;li&gt;A Docker Registry account e.g &lt;a href="https://hub.docker.com" rel="noopener noreferrer"&gt;DockerHub&lt;/a&gt;, &lt;a href="https://quay.io" rel="noopener noreferrer"&gt;Quay.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-devto-tutorial-go-pipeline&amp;amp;utm_term=tutorial" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt; &lt;strong&gt;free&lt;/strong&gt; &lt;strong&gt;tier&lt;/strong&gt; account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following tools are required to try building the sources locally for test and verification, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.drone.io/cli/install/" rel="noopener noreferrer"&gt;Drone CLI&lt;/a&gt; to build the application locally.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;As part of this tutorial we will build a simple &lt;strong&gt;Go&lt;/strong&gt; REST API called &lt;code&gt;fruits-api&lt;/code&gt;. The application uses an RDBMS (PostgreSQL or MySQL) or a NoSQL database (MongoDB) to store the fruits data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tutorial Source
&lt;/h2&gt;

&lt;p&gt;The complete demo source is available at &lt;a href="https://github.com/harness-apps/go-fruits-api" rel="noopener noreferrer"&gt;https://github.com/harness-apps/go-fruits-api&lt;/a&gt;; fork the repository to your GitHub account. For the rest of the tutorial we will refer to this repository as &lt;code&gt;$TUTORIAL_GIT_REPO&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building Application Locally
&lt;/h2&gt;

&lt;p&gt;Languages and package formats have build-specific tools. One of the core problems a developer faces is installing the right versions of those tools on their local machine. This approach has potential pitfalls and leads to &lt;strong&gt;works only on my machine&lt;/strong&gt; scenarios.&lt;/p&gt;

&lt;p&gt;Docker containers solved this problem by giving us a clean environment with the right set of tools, encouraging &lt;strong&gt;DevOps&lt;/strong&gt; best practices right from the start. This approach also helps identify potential issues with the application at the development stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://drone.io" rel="noopener noreferrer"&gt;Drone by Harness&lt;/a&gt; is an open source CI platform that can help building and testing on your local machines without the need of installing the tools as required by the programming languages.&lt;/p&gt;

&lt;p&gt;But before we start building the application, we need a place to store the build artifacts, i.e. container images. In the container/Cloud Native world this is called a &lt;strong&gt;Container Registry&lt;/strong&gt;, e.g. Docker Hub, Quay.io, Harbor, etc.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Container Registry
&lt;/h2&gt;

&lt;p&gt;Like any file you want to share with the world, storing them in an external spot makes them more accessible. A big benefit of using containers as a packaging format is the ecosystem of container registries out there. Your firm might have a registry provider such as Docker Hub, Quay.io, Harbor, Google Container Registry(GCR), Elastic Container Registry(ECR) etc.,&lt;/p&gt;

&lt;p&gt;For this tutorial we will be using &lt;a href="https://hub.docker.com/" rel="noopener noreferrer"&gt;Docker Hub&lt;/a&gt;. If you do not have a registry available to you, you can create a &lt;a href="https://hub.docker.com/signup" rel="noopener noreferrer"&gt;Docker Hub account&lt;/a&gt; and then create a repository &lt;code&gt;fruits-api&lt;/code&gt;, where we will push our &lt;code&gt;fruits-api&lt;/code&gt; application container image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e9x1ye5v4y2f79gn9o7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e9x1ye5v4y2f79gn9o7.png" alt="Fruits API Docker Repository" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Having created the &lt;code&gt;fruits-api&lt;/code&gt; repository, let's test it by building and pushing an image to the registry.&lt;/p&gt;

&lt;p&gt;Login to your Docker Hub Account,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_HUB_PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; |&lt;span class="se"&gt;\&lt;/span&gt;
  docker login &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_HUB_USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--password-stdin&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Info&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;$DOCKER_HUB_USERNAME&lt;/code&gt; - Docker Hub username, the one you used while registering for the Docker Hub account, or the one you wish to use if you already have an account with Docker Hub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;$DOCKER_HUB_PASSWORD&lt;/code&gt; - Docker Hub user password&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let us clone the tutorial application from &lt;a href="https://github.com/harness-apps/go-fruits-api" rel="noopener noreferrer"&gt;https://github.com/harness-apps/go-fruits-api&lt;/a&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#  clone go-fruits-api repository&lt;/span&gt;
git clone https://github.com/harness-apps/go-fruits-api.git &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$_&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="c"&gt;# navigate to the clone repository folder&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TUTORIAL_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cli.github.com/" rel="noopener noreferrer"&gt;GitHub Cli&lt;/a&gt; is very handy tool to work with the GitHub repositories from the command line.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Create your fork of the tutorial repository,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gh repo fork
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Info&lt;/p&gt;

&lt;p&gt;You can also create your fork from the tutorial repository &lt;a href="https://github.com/harness-apps/go-fruits-api" rel="noopener noreferrer"&gt;https://github.com/harness-apps/go-fruits-api&lt;/a&gt; directly from GitHub.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To keep things simple, let's use &lt;a href="https://drone.io" rel="noopener noreferrer"&gt;Drone by Harness&lt;/a&gt; to build and push the image from your laptop to the Docker Hub repository &lt;code&gt;fruits-api&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Copy &lt;code&gt;$TUTORIAL_HOME/.env.example&lt;/code&gt; to &lt;code&gt;$TUTORIAL_HOME/.env&lt;/code&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nv"&gt;$TUTORIAL_HOME&lt;/span&gt;/.env.example &lt;span class="nv"&gt;$TUTORIAL_HOME&lt;/span&gt;/.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Edit the &lt;code&gt;$TUTORIAL_HOME/.env&lt;/code&gt; and update it with following,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;PLUGIN_REGISTRY&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;docker.io&lt;/span&gt;
&lt;span class="py"&gt;PLUGIN_USERNAME&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;$DOCKER_HUB_USERNAME&lt;/span&gt;
&lt;span class="py"&gt;PLUGIN_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;$DOCKER_HUB_PASSWORD&lt;/span&gt;
&lt;span class="py"&gt;PLUGIN_REPO&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;$DOCKER_HUB_USERNAME/fruits-api&lt;/span&gt;
&lt;span class="py"&gt;PLUGIN_TAG&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;0.0.1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Info&lt;/strong&gt;&lt;br&gt;
Replace &lt;code&gt;$DOCKER_HUB_USERNAME&lt;/code&gt; and &lt;code&gt;$DOCKER_HUB_PASSWORD&lt;/code&gt; with your Docker Hub username and password values.&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;drone &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--env-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Info&lt;/strong&gt;&lt;br&gt;
It may take a few minutes for the build and push to complete, as Drone pulls the required container images if they do not exist locally.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If all went well, your command line output (trimmed for brevity) should look like,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
[push:350] The push refers to repository [docker.io/$DOCKER_HUB_USERNAME/fruits-api:0.0.1]
[push:351] 639e874c7280: Preparing
[push:352] 96e320b34b54: Preparing
[push:353] c306578afebb: Preparing
[push:354] 96e320b34b54: Layer already exists
[push:355] c306578afebb: Pushed
[push:356] 639e874c7280: Pushed
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the pushed image at &lt;a href="https://hub.docker.com/repository/docker/$DOCKER_HUB_USERNAME/fruits-api" rel="noopener noreferrer"&gt;https://hub.docker.com/repository/docker/$DOCKER_HUB_USERNAME/fruits-api&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;br&gt;
You can use tools like &lt;a href="https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md" rel="noopener noreferrer"&gt;crane&lt;/a&gt;, which lets you check an image and its tags from the CLI,&lt;br&gt;
e.g. &lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt; crane &lt;span class="nb"&gt;ls &lt;/span&gt;docker.io/&lt;span class="nv"&gt;$DOCKER_HUB_USERNAME&lt;/span&gt;/fruits-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;p&gt;That is simple enough to get your local build and packaging going. Our process to build and push the &lt;strong&gt;go&lt;/strong&gt; application looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F688y3e2vy3he5v88lpom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F688y3e2vy3he5v88lpom.png" alt="Pipeline Steps" width="759" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This sequence of steps is referred to as a &lt;strong&gt;Pipeline&lt;/strong&gt; in the Continuous Integration (CI) world.&lt;/p&gt;

&lt;p&gt;The drone pipeline &lt;code&gt;build and push&lt;/code&gt; step uses &lt;a href="https://ko.build/" rel="noopener noreferrer"&gt;ko-build&lt;/a&gt;, which can build Go container images without the need for a &lt;em&gt;Dockerfile&lt;/em&gt;. It also makes building multi-arch/platform images much easier.&lt;/p&gt;
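&lt;p&gt;A quick sketch of how a ko image reference is assembled. To my understanding, with ko's &lt;code&gt;--base-import-paths&lt;/code&gt; flag the image name is &lt;code&gt;KO_DOCKER_REPO&lt;/code&gt; plus the base name of the Go import path (ko's default naming additionally appends an MD5 suffix); the username below is a placeholder, and ko itself is not invoked here.&lt;/p&gt;

```shell
# Sketch: how a ko image reference under --base-import-paths is assembled.
# No ko invocation here, so this is safe to run anywhere.
DOCKER_HUB_USERNAME="${DOCKER_HUB_USERNAME:-example}"   # placeholder username
export KO_DOCKER_REPO="docker.io/${DOCKER_HUB_USERNAME}"
IMPORT_PATH="github.com/harness-apps/go-fruits-api"
IMAGE="${KO_DOCKER_REPO}/$(basename "${IMPORT_PATH}")"
echo "${IMAGE}"
```

&lt;p&gt;A multi-arch build and push would then look something like &lt;code&gt;ko build --platform=linux/amd64,linux/arm64 .&lt;/code&gt; after logging in to Docker Hub; see the ko documentation for the exact flags.&lt;/p&gt;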

&lt;p&gt;The &lt;code&gt;drone exec&lt;/code&gt; run we did earlier is fine as long as you are playing with or learning a technology (in other words, laptop use cases). When you are working on a team to deliver an enterprise application, it becomes critical that this process be centralized and automated. The &lt;a href="https://harness.io/" rel="noopener noreferrer"&gt;Harness Platform&lt;/a&gt; helps you do exactly that, and much more.&lt;/p&gt;

&lt;p&gt;The next sections of this tutorial help you get started building your CI Pipeline using the Harness platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Your First Continuous Integration Pipeline
&lt;/h2&gt;

&lt;p&gt;If you took a closer look at what your machine was doing during those local builds, you would have noticed it was bogged down for a few moments. For yourself, that is fine, but imagine having to support tens, hundreds, or even thousands of engineers; this process can be taxing on systems. Luckily, modern Continuous Integration platforms are designed to scale with distributed nodes. Harness Continuous Integration is designed to scale and to simplify externalizing your local steps; this is the Continuous Integration Pipeline. Let’s enable Harness Continuous Integration to mimic your local steps and create your first CI Pipeline. Once you are done, you will have a repeatable, consistent, and distributed build process.&lt;/p&gt;

&lt;p&gt;There are a few Harness resources to create along the way, which this guide will walk through step-by-step. There are two paths to take. One path is to have Harness host all of the needed infrastructure for a distributed build. The second is to bring your own infrastructure for the distributed build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hosted Infrastructure&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqtphwnvjomo1apg0z23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqtphwnvjomo1apg0z23.png" alt="Harness CI Hosted Overview" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bring Your Own Infrastructure&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy80f4xe8268f62zkyq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy80f4xe8268f62zkyq3.png" alt="Harness CI Bring Your Own Overview" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this tutorial we will be using the &lt;strong&gt;Hosted Infrastructure&lt;/strong&gt;, as that is the only infrastructure available in the &lt;em&gt;Free Tier&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Starting off with Harness
&lt;/h3&gt;

&lt;p&gt;Harness is a platform with many modules, but for this tutorial we will focus on the Continuous Integration (CI) module.&lt;/p&gt;

&lt;p&gt;First, sign up for a &lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-devto-tutorial-go-pipeline&amp;amp;utm_term=tutorial" rel="noopener noreferrer"&gt;Harness account to get started&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3fyg564sqi8aviqhf0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3fyg564sqi8aviqhf0o.png" alt="Harness Signup" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  GitHub Personal Access Token(PAT)
&lt;/h3&gt;

&lt;p&gt;Assuming you are leveraging GitHub, Harness will need access to the repository. It is recommended to use a GitHub &lt;a href="https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token" rel="noopener noreferrer"&gt;Personal Access Token (PAT)&lt;/a&gt; to provide your GitHub credentials.&lt;/p&gt;

&lt;p&gt;If you have not created a PAT before, on your GitHub account navigate to &lt;strong&gt;Settings&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Developer Settings&lt;/strong&gt; -&amp;gt; &lt;strong&gt;Personal Access Tokens&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4ta1g1tbmp6kojwclg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4ta1g1tbmp6kojwclg8.png" alt="GitHub PAT" width="800" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make sure to jot down the &lt;strong&gt;token&lt;/strong&gt;, as it will only be displayed once. For the rest of the tutorial we will refer to this token value as &lt;code&gt;$GITHUB_PAT&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you plan to bring your own PAT, make sure it has the &lt;code&gt;admin:repo_hook&lt;/code&gt; and &lt;code&gt;user&lt;/code&gt; scopes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Project
&lt;/h3&gt;

&lt;p&gt;The Harness Platform organizes resources such as pipelines, secrets, and connectors at various scopes: Account, Organization, and Project. For this tutorial we will create all our resources at the Project scope.&lt;/p&gt;

&lt;p&gt;Log in to the Harness account you created earlier and create a new project,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf8un21mqo9r9h8ecu49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf8un21mqo9r9h8ecu49.png" alt="New Project" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the new project page, click &lt;strong&gt;Create Project&lt;/strong&gt; to create a new project named &lt;em&gt;Fruits API&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77rs6ujw7ex2kd1w69ys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F77rs6ujw7ex2kd1w69ys.png" alt="Create Fruits API Project" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the other options at their defaults and click &lt;strong&gt;Save and Continue&lt;/strong&gt;. On the modules screen select &lt;em&gt;Continuous Integration&lt;/em&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7idbv13fi9kg44pshs3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7idbv13fi9kg44pshs3y.png" alt="Module CI" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you are ready to wire in the pieces to Harness Continuous Integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Your First Pipeline
&lt;/h2&gt;

&lt;p&gt;In the Build module of &lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-devto-tutorial-go-pipeline&amp;amp;utm_term=tutorial" rel="noopener noreferrer"&gt;Harness Continuous Integration&lt;/a&gt;, walking through the wizard is the fastest path to getting your build running. Click &lt;strong&gt;Get Started&lt;/strong&gt;; this will create a basic Pipeline for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7g92lnavf3iqy3a1uw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7g92lnavf3iqy3a1uw5.png" alt="Get Started" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Get Started&lt;/strong&gt;, select GitHub as the repository provider, enter your GitHub Access Token &lt;code&gt;$GITHUB_PAT&lt;/code&gt;, and finally click &lt;strong&gt;Test Connection&lt;/strong&gt; to verify that your credentials work,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe19t93u47fh36ww2yps3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe19t93u47fh36ww2yps3.png" alt="SCM Choice" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt;, then click &lt;strong&gt;Select Repository&lt;/strong&gt; to select the GitHub repository that you want to build [the sample is called &lt;em&gt;go-fruits-api&lt;/em&gt;].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2j3nbdj1o929w5uqq9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2j3nbdj1o929w5uqq9r.png" alt="Go Docker Repo" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Info&lt;/strong&gt;&lt;br&gt;
Please ensure the repository you select here is your fork of &lt;a href="https://github.com/harness-apps/go-fruits-api" rel="noopener noreferrer"&gt;https://github.com/harness-apps/go-fruits-api&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can leverage one of the Starter Configs or create a Starter Pipeline. Since the example app is Go based, the &lt;strong&gt;Go&lt;/strong&gt; Starter Configuration works fine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ms9qmb2xdy6e0qmynej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ms9qmb2xdy6e0qmynej.png" alt="Configure Go" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create Pipeline&lt;/strong&gt; to start adding the pipeline steps.&lt;/p&gt;

&lt;p&gt;There are two ways to add your pipeline steps, &lt;em&gt;visual&lt;/em&gt; or &lt;em&gt;YAML&lt;/em&gt;. For the rest of the tutorial we will use the &lt;em&gt;visual&lt;/em&gt; editor.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu21mmgahoiu4ksxxdp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu21mmgahoiu4ksxxdp8.png" alt="Pipeline Visual" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scaffolding will have added a single step called &lt;em&gt;Build Go App&lt;/em&gt;. In the upcoming sections we will add the other steps: &lt;em&gt;&lt;strong&gt;lint&lt;/strong&gt;&lt;/em&gt;, &lt;em&gt;&lt;strong&gt;test&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;push&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Before we get to adding the other steps, we need to create some resources that the steps require, namely secrets and connectors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Docker Hub Password Secret
&lt;/h3&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Project Setup&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Secrets&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9evu0sspnxde0nmnqgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft9evu0sspnxde0nmnqgb.png" alt="Project Secrets" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;+ New Secret&lt;/strong&gt; and select &lt;strong&gt;Text&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6aqqddjny5wgwwx3qb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6aqqddjny5wgwwx3qb3.png" alt="New Text Secret" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter your Docker Hub password in the &lt;strong&gt;Add new Encrypted Text&lt;/strong&gt; window,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06fr5e7x0vv5mioxwjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe06fr5e7x0vv5mioxwjc.png" alt="Docker Hub Password" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Docker Hub Registry Connector
&lt;/h3&gt;

&lt;p&gt;Next we need to add a &lt;strong&gt;Connector&lt;/strong&gt; that allows us to connect to, and later push the image to, our Docker Hub repository.&lt;/p&gt;

&lt;p&gt;Navigate to &lt;strong&gt;Project Setup&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Connectors&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2pqyfzykug2oal0ne4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2pqyfzykug2oal0ne4g.png" alt="Project Connectors" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;+ New Connector&lt;/strong&gt; and select &lt;strong&gt;Docker registry&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qwxypjtl8fk81f7sd7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qwxypjtl8fk81f7sd7j.png" alt="Docker Registry Connector" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the new connector wizard &lt;strong&gt;Overview&lt;/strong&gt; screen, enter the name of the connector as &lt;code&gt;docker hub&lt;/code&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft20ukpdkrgush36bxfst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft20ukpdkrgush36bxfst.png" alt="Docker Connector Overview" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt; to configure the credentials,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmb19crwxxpv2apoz2lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmb19crwxxpv2apoz2lg.png" alt="Docker Connector Credentials" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Info&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the &lt;strong&gt;Username&lt;/strong&gt; with your &lt;code&gt;$DOCKER_HUB_USERNAME&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;For the &lt;strong&gt;Password&lt;/strong&gt; field click &lt;em&gt;Create or Select a Secret&lt;/em&gt; to select the secret &lt;em&gt;&lt;strong&gt;docker hub password&lt;/strong&gt;&lt;/em&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click &lt;strong&gt;Continue&lt;/strong&gt; and use the &lt;em&gt;Harness Platform&lt;/em&gt; as the connectivity mode option,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcixr25mecvhkqhnt4cmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcixr25mecvhkqhnt4cmj.png" alt="Docker Connector Connectivity Mode" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Save and Continue&lt;/strong&gt; to perform the connectivity test,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8znec5tghe8a3ioe1yht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8znec5tghe8a3ioe1yht.png" alt="Docker Connector Success" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Finish&lt;/strong&gt; to complete the creation of the Connector resource.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84aw1fzja4oyjajh6txy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84aw1fzja4oyjajh6txy.png" alt="Connectors List" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you are all set to add other steps to the &lt;strong&gt;Build Go&lt;/strong&gt; pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update Pipeline
&lt;/h3&gt;

&lt;p&gt;Navigate to the &lt;strong&gt;Projects&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Pipelines&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnfh0ctboszwpifaz0jr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnfh0ctboszwpifaz0jr.png" alt="Pipelines List" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Build Go&lt;/strong&gt; pipeline,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab7xkdr6kkgeiyfb0q67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fab7xkdr6kkgeiyfb0q67.png" alt="Build Go Pipeline" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Delete the existing &lt;strong&gt;Build Go App&lt;/strong&gt; step by clicking the &lt;code&gt;x&lt;/code&gt; that appears when you hover over the step.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add Step&lt;/strong&gt; to add a new step called &lt;strong&gt;lint&lt;/strong&gt;; from the &lt;em&gt;Step Library&lt;/em&gt; choose the step type &lt;strong&gt;Run&lt;/strong&gt; and configure the step with these details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lint
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Lint the go application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the &lt;strong&gt;Shell&lt;/strong&gt; to be &lt;code&gt;Bash&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;golangci-lint run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx6sy7ty5m7x0ip2lk3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbx6sy7ty5m7x0ip2lk3p.png" alt="Lint Step" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step and click &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/p&gt;

&lt;p&gt;As you did earlier, click &lt;strong&gt;Add Step&lt;/strong&gt; to add a new step called &lt;strong&gt;test&lt;/strong&gt;; from the &lt;em&gt;Step Library&lt;/em&gt; choose the step type &lt;strong&gt;Run&lt;/strong&gt; and configure the step with these details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Test the go application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the &lt;strong&gt;Shell&lt;/strong&gt; to be &lt;code&gt;Bash&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-timeout&lt;/span&gt; 30s &lt;span class="nt"&gt;-v&lt;/span&gt; ./... 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna8r4g9rjxrj6ai921ud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna8r4g9rjxrj6ai921ud.png" alt="Test Step" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While building the application locally we used &lt;em&gt;SQLite&lt;/em&gt; as our database. The Go application can also run with PostgreSQL, MySQL, or MongoDB. For this tutorial we will be using &lt;em&gt;MySQL&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;test&lt;/strong&gt; step to connect to the &lt;strong&gt;mysql&lt;/strong&gt; service, add the following environment variables to the step configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;FRUITS_DB_TYPE: mysql
MYSQL_HOST: &lt;span class="s2"&gt;"mysql"&lt;/span&gt;
MYSQL_PORT: 3306
MYSQL_ROOT_PASSWORD: superS3cret!
MYSQL_PASSWORD: pa55Word!
MYSQL_USER: demo
MYSQL_DATABASE: demodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
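&lt;p&gt;To see how these variables fit together, here is a hypothetical sketch of the &lt;code&gt;go-sql-driver/mysql&lt;/code&gt; style DSN a Go application could assemble from them (how &lt;em&gt;fruits-api&lt;/em&gt; actually consumes the variables may differ). Note that &lt;code&gt;MYSQL_HOST&lt;/code&gt; is &lt;code&gt;mysql&lt;/code&gt;, matching the name of the service dependency configured in the next section.&lt;/p&gt;

```shell
# Hypothetical sketch: compose a go-sql-driver/mysql style DSN
# (user:password@tcp(host:port)/dbname) from the same variables.
# How fruits-api actually reads these variables may differ.
export FRUITS_DB_TYPE="mysql"
export MYSQL_HOST="mysql"        # host name == service dependency name
export MYSQL_PORT="3306"
export MYSQL_USER="demo"
export MYSQL_PASSWORD='pa55Word!'
export MYSQL_DATABASE="demodb"
DSN="${MYSQL_USER}:${MYSQL_PASSWORD}@tcp(${MYSQL_HOST}:${MYSQL_PORT})/${MYSQL_DATABASE}"
echo "${DSN}"
```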



&lt;p&gt;The &lt;em&gt;environment&lt;/em&gt; variables can be added by clicking &lt;strong&gt;+ Add&lt;/strong&gt; under the &lt;strong&gt;Environment Variables&lt;/strong&gt; section of the step configuration,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s6i1gx08kcnbfdjcp5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0s6i1gx08kcnbfdjcp5e.png" alt="Test environment Variables" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;br&gt;
You can reopen the step configuration screen by clicking the step in the visual editor.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Click &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How can the &lt;em&gt;test&lt;/em&gt; step connect to the &lt;em&gt;MySQL&lt;/em&gt; database?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Harness Pipelines support a concept called a &lt;strong&gt;Service Dependency&lt;/strong&gt;: a detached service that is accessible to all Steps in a Stage. Service dependencies support workflows such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integration testing: You can set up a service and then run tests against this service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Running Docker-in-Docker: You can set up a &lt;a href="https://ngdocs.harness.io/article/ajehk588p4" rel="noopener noreferrer"&gt;dind service&lt;/a&gt; to process Docker commands in Run Steps.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our tutorial we will use the &lt;em&gt;Integration testing&lt;/em&gt; workflow to let the &lt;strong&gt;test&lt;/strong&gt; step connect to &lt;em&gt;MySQL&lt;/em&gt; and run the integration test cases against it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add the MySQL Service Dependency
&lt;/h3&gt;

&lt;p&gt;On the Pipeline editor click &lt;strong&gt;Add Service Dependency&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19ez81o4oulxn1gzwjxx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19ez81o4oulxn1gzwjxx.png" alt="Add Service Dependency" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Configure the MySQL Dependency Service with details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mysql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;the mysql or mariadb server that will be used for testing.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Select the &lt;strong&gt;Container Registry&lt;/strong&gt; to be &lt;code&gt;docker hub&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mariadb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service dependency needs to be configured with the same environment variables that we added to the &lt;strong&gt;test&lt;/strong&gt; step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;MYSQL_PORT: 3306
MYSQL_ROOT_PASSWORD: superS3cret!
MYSQL_PASSWORD: pa55Word!
MYSQL_USER: demo
MYSQL_DATABASE: demodb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfsjkp67tqd3yblbicrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftfsjkp67tqd3yblbicrl.png" alt="Configure MySQL Dependency" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step and then click &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/p&gt;
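
&lt;p&gt;Behind the editor, Harness stores the pipeline as YAML. A rough sketch of what the saved service dependency can look like (field names and the connector identifier here are approximations and may differ slightly in your Harness version):&lt;/p&gt;

```yaml
# Sketch only: a Harness CI service dependency for the MySQL/MariaDB service
serviceDependencies:
  - identifier: mysql
    name: mysql
    description: the mysql or mariadb server that will be used for testing.
    type: Service
    spec:
      connectorRef: docker_hub   # assumed name of your Docker Hub connector
      image: mariadb
      envVariables:
        MYSQL_PORT: "3306"
        MYSQL_ROOT_PASSWORD: superS3cret!
        MYSQL_PASSWORD: pa55Word!
        MYSQL_USER: demo
        MYSQL_DATABASE: demodb
```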

&lt;h3&gt;
  
  
  Lint and Test the Application
&lt;/h3&gt;

&lt;p&gt;Let us verify that we are able to &lt;em&gt;&lt;strong&gt;lint&lt;/strong&gt;&lt;/em&gt; and &lt;em&gt;&lt;strong&gt;test&lt;/strong&gt;&lt;/em&gt; our Go application.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Run&lt;/strong&gt; from the pipeline editor page,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y0d9klm4026z8ytnav6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3y0d9klm4026z8ytnav6.png" alt="Run Pipeline" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leaving everything at the defaults, namely &lt;strong&gt;Git Branch&lt;/strong&gt; and &lt;strong&gt;Branch Name&lt;/strong&gt; set to &lt;em&gt;main&lt;/em&gt;, click &lt;strong&gt;Run Pipeline&lt;/strong&gt; to start the pipeline run. If all went well you should see a successful pipeline run as shown,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnbqzratis4o7429mk7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnbqzratis4o7429mk7k.png" alt="Lint and Test Success" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;&lt;br&gt;
You can click on each step to view the logs of the respective step&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Having tasted success with our pipeline run, let us add the next step: building the Go application and pushing its image to the container registry.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build and Push Image to Container Registry
&lt;/h3&gt;

&lt;p&gt;As we did earlier, navigate to &lt;strong&gt;Projects&lt;/strong&gt; --&amp;gt; &lt;strong&gt;Pipelines&lt;/strong&gt;,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxht87shjqp421u0u5rs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxht87shjqp421u0u5rs6.png" alt="Pipelines List" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And click &lt;strong&gt;Build Go&lt;/strong&gt; pipeline to open the pipeline editor,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff744ccm30xfu88du5dmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff744ccm30xfu88du5dmv.png" alt="Build Go Pipeline" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add Step&lt;/strong&gt; to add a new step called &lt;strong&gt;build and push&lt;/strong&gt;. From the &lt;em&gt;Step Library&lt;/em&gt; choose the step type &lt;strong&gt;Run&lt;/strong&gt; and configure the step with the following details,&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Name&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build and push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build go application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Choose &lt;strong&gt;Bash&lt;/strong&gt; as the &lt;strong&gt;Shell&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Command&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_HUB_PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | ko auth login docker.io &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DOCKER_HUB_USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--password-stdin&lt;/span&gt;
ko build &lt;span class="nt"&gt;--bare&lt;/span&gt; &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/amd64 &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64 &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbanwzyspl6pidacy6f3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbanwzyspl6pidacy6f3.png" alt="Build and Push Step" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also need to configure a few environment variables that &lt;code&gt;ko&lt;/code&gt; requires to build and push the image to the &lt;code&gt;fruits-api&lt;/code&gt; container repository.&lt;/p&gt;

&lt;p&gt;Update the &lt;strong&gt;Environment Variables&lt;/strong&gt; section with the following values,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;DOCKER_HUB_USERNAME: &lt;span class="nv"&gt;$DOCKER_HUB_USERNAME&lt;/span&gt;
DOCKER_HUB_PASSWORD: &amp;lt;+secrets.getValue&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"docker_hub_password"&lt;/span&gt;&lt;span class="o"&gt;)&amp;gt;&lt;/span&gt;
KO_DOCKER_REPO: docker.io/&lt;span class="nv"&gt;$DOCKER_HUB_USERNAME&lt;/span&gt;/fruits-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbyk77hrnbra3b0zw7je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpbyk77hrnbra3b0zw7je.png" alt="Build and Push Env" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Info&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As marked, ensure the &lt;code&gt;DOCKER_HUB_PASSWORD&lt;/code&gt; is of type &lt;strong&gt;Expression&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;secrets.getValue&lt;/code&gt; is an expression that retrieves the value of the secret &lt;code&gt;docker_hub_password&lt;/code&gt;, which was created earlier in the tutorial. Check the &lt;a href="https://developer.harness.io/docs/platform/security/add-use-text-secrets/#step-3-reference-the-encrypted-text-by-identifier" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for more info&lt;/li&gt;
&lt;li&gt;All &lt;code&gt;$DOCKER_HUB_USERNAME&lt;/code&gt; references should be replaced with your Docker Hub username&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
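
&lt;p&gt;For reference, the same step expressed in the pipeline YAML looks roughly like the sketch below (identifiers and nesting are approximate and may differ across Harness versions):&lt;/p&gt;

```yaml
# Sketch only: the "build and push" Run step in Harness CI YAML
- step:
    type: Run
    name: build and push
    identifier: build_and_push
    spec:
      shell: Bash
      command: |
        echo -n "$DOCKER_HUB_PASSWORD" | ko auth login docker.io -u "$DOCKER_HUB_USERNAME" --password-stdin
        ko build --bare --platform linux/amd64 --platform linux/arm64 .
      envVariables:
        DOCKER_HUB_USERNAME: $DOCKER_HUB_USERNAME   # replace with your Docker Hub username
        DOCKER_HUB_PASSWORD: &amp;lt;+secrets.getValue("docker_hub_password")&amp;gt;
        KO_DOCKER_REPO: docker.io/$DOCKER_HUB_USERNAME/fruits-api
```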

&lt;p&gt;Click &lt;strong&gt;Apply Changes&lt;/strong&gt; to save the step and click &lt;strong&gt;Save&lt;/strong&gt; to save the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm43t0jv21wjeq3hlh61z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm43t0jv21wjeq3hlh61z.png" alt="Final Pipeline" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With those changes saved, you are ready to lint, test, build, and push your &lt;strong&gt;Go&lt;/strong&gt; application to the container registry (Docker Hub).&lt;/p&gt;

&lt;h2&gt;
  
  
  Run CI Pipeline
&lt;/h2&gt;

&lt;p&gt;As we did earlier, click &lt;strong&gt;Run&lt;/strong&gt; from the pipeline editor window,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1yygw5r0iqqywq00yp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1yygw5r0iqqywq00yp5.png" alt="Run Pipeline" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leaving everything at the defaults, namely &lt;strong&gt;Git Branch&lt;/strong&gt; and &lt;strong&gt;Branch Name&lt;/strong&gt; set to &lt;em&gt;main&lt;/em&gt;, click &lt;strong&gt;Run Pipeline&lt;/strong&gt; to start the pipeline run.&lt;/p&gt;


&lt;p&gt;After a successful run, head back to Docker Hub, and the tag &lt;code&gt;latest&lt;/code&gt; is there!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9jlz0adf372plqzu6hr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9jlz0adf372plqzu6hr.png" alt="Success" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is just the start of your Continuous Integration journey. It might seem like a lot of steps to get your local build onto the platform, but it unlocks a world of possibilities.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Exercise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/harness-apps/go-fruits-api" rel="noopener noreferrer"&gt;https://github.com/harness-apps/go-fruits-api&lt;/a&gt; repository has another branch, &lt;strong&gt;mongodb&lt;/strong&gt;. Adapt your pipeline so that it builds and tests the code from the &lt;strong&gt;mongodb&lt;/strong&gt; branch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Continuing on Your Continuous Integration Journey
&lt;/h2&gt;

&lt;p&gt;You can now execute your builds whenever you want in a consistent fashion. You can modify the trigger to watch for SCM events so that, upon a commit for example, the pipeline gets kicked off automatically. All of the objects you create are available for you to re-use. Lastly, you can even keep your backing work as part of your source code: everything that you do in Harness is represented by YAML, so feel free to store it as part of your project.&lt;/p&gt;

&lt;p&gt;After you have built your artifact, the next step is to deploy it. This is where Continuous Delivery steps in, so make sure to check out some of the other &lt;a href="https://developer.harness.io/tutorials/deploy-services" rel="noopener noreferrer"&gt;CD Tutorials&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Simplify Your Dockerfile</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Mon, 20 Feb 2023 09:19:23 +0000</pubDate>
      <link>https://dev.to/kameshsampath/simplify-your-dockerfile-1j5k</link>
      <guid>https://dev.to/kameshsampath/simplify-your-dockerfile-1j5k</guid>
<description>&lt;p&gt;With &lt;a href="https://www.rust-lang.org/" rel="noopener noreferrer"&gt;rustlang&lt;/a&gt; gaining lots of popularity, I thought I would give it a try. As a cloud native application developer, the first thing I thought of building was a simple REST API that greets the user by name.&lt;/p&gt;

&lt;p&gt;After building the application locally, the next immediate step was to containerise it. Though we have an official &lt;a href="https://hub.docker.com/_/rust/" rel="noopener noreferrer"&gt;rustlang&lt;/a&gt; image, I wanted to build a customised image that would allow me to build cross-platform container images, namely &lt;code&gt;linux/arm64&lt;/code&gt; and &lt;code&gt;linux/amd64&lt;/code&gt;. I don't want to dwell on those details as they demand their own blog post ;).&lt;/p&gt;

&lt;p&gt;So my &lt;code&gt;Dockerfile&lt;/code&gt; with all rust specific tools and dependencies looks like:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
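
&lt;p&gt;A rough sketch of the shape of that &lt;code&gt;Dockerfile&lt;/code&gt; (the packages, versions, and commands here are assumptions reconstructed from the description below, not the exact gist contents):&lt;/p&gt;

```dockerfile
# Reconstruction (assumed): two stages, with all setup chained in one long RUN
FROM --platform=$TARGETPLATFORM rust:1.67-alpine3.17 AS bins

RUN apk add -U --no-cache alpine-sdk gcompat go-task \
  && cargo install cargo-zigbuild

FROM --platform=$TARGETPLATFORM alpine:3.17 as final

COPY --from=bins /usr/local/cargo/bin/cargo-zigbuild /usr/local/cargo/bin/

# the hard-to-read part: many commands chained with && in a single RUN
RUN apk add -U --no-cache alpine-sdk gcompat \
  && wget -qO rustup-init https://sh.rustup.rs \
  && sh rustup-init -y --default-toolchain 1.67.1 \
  && rustup target add aarch64-unknown-linux-musl \
  && rustup target add x86_64-unknown-linux-musl \
  && adduser -D -u 1001 builder \
  && rm -rf /tmp/*
```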


&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; has two stages, &lt;strong&gt;bins&lt;/strong&gt; and &lt;strong&gt;final&lt;/strong&gt;. &lt;strong&gt;bins&lt;/strong&gt; does all the binary builds that are required by the &lt;strong&gt;final&lt;/strong&gt; stage. The &lt;strong&gt;final&lt;/strong&gt; stage builds the final image that can be used to build multi-arch container images of our rust applications.&lt;/p&gt;

&lt;p&gt;Though it is not a complicated &lt;code&gt;Dockerfile&lt;/code&gt;, if you look at the last &lt;code&gt;RUN&lt;/code&gt; instruction of the &lt;strong&gt;final&lt;/strong&gt; stage, it is complex and hard to &lt;em&gt;debug&lt;/em&gt;, typically when one of the commands fails to run. The &lt;code&gt;RUN&lt;/code&gt; instruction is also hard to read and understand. Splitting the commands across multiple &lt;code&gt;RUN&lt;/code&gt; instructions is not recommended, as each one creates a new layer and your final image gets bloated in size.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: rustlang builder images are usually big, ~700 MB (compressed), by virtue of the dependencies they need, e.g. gcc, cross-compilation linkers, etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I was then thinking of ways to simplify this Dockerfile, not from a size point of view, but at least to make it simpler to read and understand.&lt;/p&gt;

&lt;p&gt;The target was to have a single &lt;code&gt;RUN&lt;/code&gt; instruction, but to split the commands into individual steps without compromising on size.&lt;/p&gt;

&lt;p&gt;I then stumbled upon &lt;a href="https://taskfile.dev/" rel="noopener noreferrer"&gt;Taskfile&lt;/a&gt; -- Task is a task runner / build tool -- which is similar to &lt;a href="https://www.gnu.org/software/make/" rel="noopener noreferrer"&gt;GNU Make&lt;/a&gt; but way simpler. Taskfile helped me make the &lt;code&gt;Dockerfile&lt;/code&gt; simpler.&lt;/p&gt;

&lt;p&gt;I moved the whole set of &lt;code&gt;RUN&lt;/code&gt; commands into a &lt;code&gt;Taskfile&lt;/code&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
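
&lt;p&gt;A sketch of what such a &lt;code&gt;Taskfile.yaml&lt;/code&gt; can look like (the task names, descriptions, and commands here are illustrative, not the exact gist contents):&lt;/p&gt;

```yaml
# Illustrative Taskfile: each former RUN command becomes a named, documented task
version: "3"

tasks:
  default:
    desc: run the complete builder setup
    cmds:
      - task: os-packages
      - task: rust-toolchain

  os-packages:
    desc: install the build dependencies from the Alpine repositories
    cmds:
      - apk add -U --no-cache alpine-sdk gcompat

  rust-toolchain:
    desc: install rustup and the configured Rust toolchain
    cmds:
      - wget -qO rustup-init https://sh.rustup.rs
      - sh rustup-init -y --default-toolchain "$RUST_VERSION"
```

&lt;p&gt;Each task can also be run on its own, e.g. &lt;code&gt;task rust-toolchain&lt;/code&gt;, which makes the individual steps easy to test outside the Docker build.&lt;/p&gt;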


&lt;p&gt;Though it is verbose, it helps in understanding the commands we are running as part of the Docker build. With descriptions, comments, and conditions it becomes more powerful and self-documenting, explaining what is being executed and when.&lt;/p&gt;

&lt;p&gt;Updating the &lt;code&gt;Dockerfile&lt;/code&gt; results in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;#syntax=docker/dockerfile:1.3-labs&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;--platform=$TARGETPLATFORM rust:1.67-alpine3.17&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;bins&lt;/span&gt;

&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; TARGETPLATFORM&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nt"&gt;--mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/cargo/registry &lt;span class="se"&gt;\
&lt;/span&gt;  apk add &lt;span class="nt"&gt;-U&lt;/span&gt; &lt;span class="nt"&gt;--no-cache&lt;/span&gt; alpine-sdk gcompat go-task &lt;span class="se"&gt;\
&lt;/span&gt;  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; cargo &lt;span class="nb"&gt;install &lt;/span&gt;cargo-zigbuild

&lt;span class="c"&gt;## The core builder that can be used to build rust applications&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;--platform=$TARGETPLATFORM alpine:3.17&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;as&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;final&lt;/span&gt;

&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; TARGETPLATFORM&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; rust_version=1.67.1&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; rustup_version=1.25.2&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; user_id=1001&lt;/span&gt;
&lt;span class="k"&gt;ARG&lt;/span&gt;&lt;span class="s"&gt; user=builder&lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; USER_ID=$user_id \&lt;/span&gt;
  USER=$user \
  RUST_VERSION=$rust_version \
  RUSTUP_VERSION=$rustup_version \
  RUSTUP_HOME=/usr/local/rustup \
  CARGO_HOME=/usr/local/cargo \
  PATH=/usr/local/cargo/bin:$PATH

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=bins /usr/bin/go-task /usr/local/bin/task&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=bins /usr/local/cargo/bin/cargo-zigbuild /usr/local/cargo/bin/&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; tasks/Taskfile.root.yaml ./Taskfile.yaml&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;task
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we moved all our commands to the &lt;code&gt;Taskfile&lt;/code&gt;, the &lt;code&gt;RUN&lt;/code&gt; instruction now just has to run the &lt;code&gt;task&lt;/code&gt; command, which will then run the &lt;code&gt;default&lt;/code&gt; task from the &lt;code&gt;Taskfile&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To summarise, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wrote a multi-stage &lt;code&gt;Dockerfile&lt;/code&gt; to build a multi-arch rust app container&lt;/li&gt;
&lt;li&gt;Moved all the instructions from &lt;code&gt;RUN&lt;/code&gt; to &lt;code&gt;Taskfile&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Used the &lt;code&gt;task&lt;/code&gt; command in &lt;code&gt;RUN&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;Moved commands to the Taskfile, which allows us to run/test the tasks separately before using them in the &lt;code&gt;Dockerfile&lt;/code&gt;. For more usage check the Taskfile &lt;a href="https://taskfile.dev/usage/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For an end-to-end example refer to &lt;a href="https://github.com/kameshsampath/rust-greeter" rel="noopener noreferrer"&gt;rust-greeter&lt;/a&gt;, which uses &lt;a href="https://github.com/kameshsampath/rust-zig-builder" rel="noopener noreferrer"&gt;rust-zig-builder&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>welcome</category>
      <category>community</category>
      <category>vibecoding</category>
      <category>devto</category>
    </item>
    <item>
      <title>Build and sign application containers</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Thu, 19 Jan 2023 07:49:42 +0000</pubDate>
      <link>https://dev.to/kameshsampath/build-and-sign-application-containers-57l8</link>
      <guid>https://dev.to/kameshsampath/build-and-sign-application-containers-57l8</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Open Source software has been at the heart of every software development model. Its increased usage means the software we build is increasingly susceptible to threats and vulnerabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Supply Chain problem ?
&lt;/h2&gt;

&lt;p&gt;The supply chain problem is about how and where vulnerabilities are introduced into the software supply chain. They are usually introduced at the &lt;strong&gt;Source&lt;/strong&gt; or &lt;strong&gt;Dependencies&lt;/strong&gt; level and seep into the software artifact consumed by the end user, a.k.a. the &lt;strong&gt;Consumer&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://slsa.dev/spec/v0.1/threats" rel="noopener noreferrer"&gt;threats&lt;/a&gt; can be introduced at two levels, which form the basis of software integrity,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Source Integrity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Compromised Source Repo&lt;/li&gt;
&lt;li&gt;Unauthorised Source Code change&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Build Integrity&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build from modified source&lt;/li&gt;
&lt;li&gt;Compromise build process&lt;/li&gt;
&lt;li&gt;Use compromised dependency&lt;/li&gt;
&lt;li&gt;Upload modified package&lt;/li&gt;
&lt;li&gt;Compromise package repo&lt;/li&gt;
&lt;li&gt;Use compromised package&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  SLSA
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;S&lt;/strong&gt;upply-chain &lt;strong&gt;L&lt;/strong&gt;evels for &lt;strong&gt;S&lt;/strong&gt;oftware &lt;strong&gt;A&lt;/strong&gt;rtifacts (&lt;a href="https://slsa.dev/" rel="noopener noreferrer"&gt;SLSA&lt;/a&gt;) puts a security framework in place that each software build can follow, ensuring the integrity of the built artifact.&lt;/p&gt;

&lt;p&gt;There are four &lt;a href="https://slsa.dev/spec/v0.1/levels" rel="noopener noreferrer"&gt;Levels&lt;/a&gt; of maturity in SLSA,&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Documentation of the build process&lt;/td&gt;
&lt;td&gt;Continuous Integration(CI)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Tamper resistance of the build service&lt;/td&gt;
&lt;td&gt;Hosted source/build&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Extra resistance to specific threats&lt;/td&gt;
&lt;td&gt;Security controls on host&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Highest levels of confidence and trust&lt;/td&gt;
&lt;td&gt;Two-party review + hermetic builds&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As with any process, maturing through the SLSA levels is a continuous improvement process. As part of this blog we will go through a simple tutorial using the &lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-dh-tutorial-cosign&amp;amp;utm_term=get-started" rel="noopener noreferrer"&gt;Harness Platform&lt;/a&gt;, which will allow us to document our build process (SLSA &lt;code&gt;Level 1&lt;/code&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;IMPORTANT: All the levels require us to have build &lt;a href="https://slsa.dev/provenance/v0.2" rel="noopener noreferrer"&gt;Provenance&lt;/a&gt;; since it deserves its own post, let us revisit it as part of another blog post. If you want to learn about provenance, please do visit this great &lt;a href="https://dlorenc.medium.com/policy-and-attestations-89650fd6f4fa" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Tutorial
&lt;/h2&gt;

&lt;p&gt;With containers being at the heart of Cloud Native application development, it has become even more critical to ensure their integrity. One of the ways to do this is to sign and verify the container images. &lt;a href="https://sigstore.dev" rel="noopener noreferrer"&gt;sigstore&lt;/a&gt; is an open source project that empowers software developers to securely sign container images.&lt;/p&gt;

&lt;p&gt;As part of this &lt;a href="https://developer.harness.io/tutorials/build-code/ci-tutorial-container-signing" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; we will,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand how to build a container image and sign/verify it using the sigstore &lt;code&gt;cosign&lt;/code&gt; utility&lt;/li&gt;
&lt;li&gt;Integrate &lt;code&gt;cosign&lt;/code&gt; as part of Continuous Integration(CI) using &lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-dh-tutorial-cosign&amp;amp;utm_term=get-started" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
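
&lt;p&gt;At its core, the key-based &lt;code&gt;cosign&lt;/code&gt; flow boils down to three commands. This is a sketch only; the image reference below is a placeholder, and the tutorial covers the exact steps:&lt;/p&gt;

```shell
# generate a key pair: cosign.key (private) and cosign.pub (public)
cosign generate-key-pair

# sign the pushed image with the private key
cosign sign --key cosign.key docker.io/example/fruits-api:latest

# verify the signature with the public key
cosign verify --key cosign.pub docker.io/example/fruits-api:latest
```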

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Signing alone is not sufficient to ensure the overall security of any software; adopting SLSA and continuously improving the build process through the SLSA levels is critical. By using the Harness Platform we documented our build process and also implicitly started to move towards SLSA &lt;code&gt;Level 2&lt;/code&gt; by using a hosted source (&lt;a href="https://github.com" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;) and build (&lt;a href="https://app.harness.io/auth/#/signup/?module=ci?utm_source=internal&amp;amp;utm_medium=social&amp;amp;utm_campaign=community&amp;amp;utm_content=kamesh-dh-tutorial-cosign&amp;amp;utm_term=get-started" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt;).&lt;/p&gt;

</description>
      <category>shell</category>
      <category>programming</category>
    </item>
    <item>
      <title>Simplify Golang Multi Architecture Container Builds</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Wed, 26 Oct 2022 01:47:10 +0000</pubDate>
      <link>https://dev.to/kameshsampath/simplify-golang-multi-architecture-container-builds-2c6i</link>
      <guid>https://dev.to/kameshsampath/simplify-golang-multi-architecture-container-builds-2c6i</guid>
<description>&lt;p&gt;With arm64-based laptops getting very popular, it has become necessary for container developers to build multi-architecture images, e.g. build amd64 images on an arm64 machine.&lt;/p&gt;

&lt;p&gt;In my case I use an Apple M1. As part of my day job I build, test, and deploy container applications; by default my MacBook produces &lt;em&gt;linux/arm64&lt;/em&gt; images. If I need to deploy the same image to a cloud service, e.g. Google Cloud Run or Google Kubernetes Engine, then I need to have the &lt;em&gt;linux/amd64&lt;/em&gt; image as well.&lt;/p&gt;

&lt;p&gt;For such use cases we usually write a Dockerfile per architecture, a manifest file, etc. There is nothing wrong with that approach, but it soon becomes too hard to maintain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/IHnROpQICe4kE/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/IHnROpQICe4kE/giphy.gif" width="347" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So what we need is a toolchain that can,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use the same process to build the image locally as well as for cloud usage. I found &lt;a href="https://goreleaser.com" rel="noopener noreferrer"&gt;GoReleaser&lt;/a&gt; to be apt for this requirement.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be declarative, typically mapping to the standard steps of building a golang application, i.e. test, build, and push the image to a container registry. &lt;a href="https://drone.io" rel="noopener noreferrer"&gt;Drone&lt;/a&gt; helps to set up a build pipeline in a declarative way and has plugins for all major tools/platforms, including one for GoReleaser.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build multi-architecture images. &lt;a href="https://docs.docker.com/build/" rel="noopener noreferrer"&gt;Buildx&lt;/a&gt; helps us build multi-architecture images using the standard docker build semantics. Adding to our tools, there is the Drone Buildx Plugin that allows us to just plug it into our pipeline.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The end-to-end demo of this blog post is available in my &lt;a href="https://github.com/kameshsampath/go-hello-world.git" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Clone it to your laptop for quick reference,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/kameshsampath/go-hello-world.git &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$_&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEMO_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PWD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For the rest of the post we will refer to this cloned folder as &lt;code&gt;$DEMO_HOME&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;For this demo we will be using a short lived container registry called &lt;a href="https://ttl.sh" rel="noopener noreferrer"&gt;ttl.sh&lt;/a&gt; for pushing and pulling the demo app image. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the &lt;a href="https://docs.drone.io/cli/install/" rel="noopener noreferrer"&gt;Drone CLI&lt;/a&gt; and add it to your path.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: You are welcome to use your own registry. Please check the &lt;a href="https://drone-plugin-index.geekdocs.de/plugins/drone-docker-buildx/" rel="noopener noreferrer"&gt;Drone Buildx Plugin&lt;/a&gt; documentation on how to configure the extra parameters like &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;password&lt;/code&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let us set up some environment variables that we will be using as part of the demo,&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# a unique uid as image identifier, it needs to be in the lowercase&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;IMAGE_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;uuidgen | &lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="s1"&gt;'[:upper:]'&lt;/span&gt; &lt;span class="s1"&gt;'[:lower:]'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# short lived image for 10 mins&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;IMAGE_TAG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For convenience we will save these values to a &lt;code&gt;.env&lt;/code&gt; file, which we will use to pass environment variables to Drone pipelines and Docker runs.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;envsubst &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/.env.example"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEMO_HOME&lt;/span&gt;&lt;span class="s2"&gt;/.env"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: The example above uses &lt;a href="https://www.man7.org/linux/man-pages/man1/envsubst.1.html" rel="noopener noreferrer"&gt;envsubst&lt;/a&gt; to update the file. If you don't have envsubst installed, you can update the &lt;code&gt;.env&lt;/code&gt; file manually.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let us examine our Drone pipeline, &lt;code&gt;.drone.yml&lt;/code&gt;:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;It is a very simple Drone pipeline with three steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;test&lt;/strong&gt;, which runs the Go tests&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;build&lt;/strong&gt;, which uses GoReleaser to build the Go application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;push&lt;/strong&gt;, which pushes the application image to the container registry we configured earlier&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The push step also specifies the platforms &lt;code&gt;linux/arm64&lt;/code&gt; and &lt;code&gt;linux/amd64&lt;/code&gt;, which instructs &lt;em&gt;docker buildx&lt;/em&gt; to perform a multi-architecture build, producing a container image compatible with both platforms.&lt;/p&gt;

&lt;p&gt;The push step uses the following Dockerfile to perform the build.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;There are a few important things to note:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The comment &lt;code&gt;# syntax=docker/dockerfile:1.4&lt;/code&gt; instructs the build to use the Dockerfile syntax from docker/dockerfile:1.4. Though it is not strictly required in our case, it ensures that buildx is used to perform the builds. Buildx is bundled with Docker v19.03 and above.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ARG TARGETARCH&lt;/code&gt; lets the build know the architecture it is being built for, e.g. &lt;code&gt;amd64&lt;/code&gt; or &lt;code&gt;arm64&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The base image &lt;code&gt;gcr.io/distroless/base&lt;/code&gt; lets us build a tiny image containing just our application binary :).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
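&lt;p&gt;To make these notes concrete, a Dockerfile following this pattern could look roughly like the sketch below. The binary path and port are illustrative assumptions, not the exact file from the demo repository:&lt;/p&gt;

```dockerfile
# syntax=docker/dockerfile:1.4

# distroless base keeps the final image tiny: just our binary
FROM gcr.io/distroless/base

# TARGETARCH is set automatically by buildx for each platform,
# e.g. amd64 or arm64
ARG TARGETARCH

# pick the binary GoReleaser built for this architecture
# (the dist/ path here is an assumption for illustration)
COPY dist/server_linux_${TARGETARCH}/server /bin/server

EXPOSE 8080
ENTRYPOINT ["/bin/server"]
```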

&lt;p&gt;That's pretty much all we need to build our multi-architecture Go application. Run the following drone command to start the pipeline, which will build and push the image to the container registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;drone &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;--trusted&lt;/span&gt; &lt;span class="nt"&gt;--env-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once the build is done, run the following command to start the built container locally,&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;A simple &lt;code&gt;curl&lt;/code&gt; to &lt;a href="http://localhost:8080" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt; should return "Hello World".&lt;/p&gt;

&lt;p&gt;Lastly, a small caveat when using GoReleaser: when cross-compiling the code, GoReleaser creates distribution folders like &lt;code&gt;linux_arm64&lt;/code&gt; and &lt;code&gt;linux_amd64_v1&lt;/code&gt; under the &lt;strong&gt;dist&lt;/strong&gt; folder, where the built binaries for the platforms (&lt;code&gt;linux/arm64&lt;/code&gt; and &lt;code&gt;linux/amd64&lt;/code&gt;) are saved.&lt;/p&gt;

&lt;p&gt;The folder names follow the pattern &lt;code&gt;&amp;lt;platform&amp;gt;_&amp;lt;arch&amp;gt;_&amp;lt;version&amp;gt;&lt;/code&gt;, with &lt;code&gt;version&lt;/code&gt; being optional. For &lt;code&gt;amd64&lt;/code&gt;, GoReleaser creates a folder like &lt;code&gt;linux_amd64_v1&lt;/code&gt;, which makes it difficult to pick the binary using the &lt;code&gt;TARGETARCH&lt;/code&gt; arg from within the Dockerfile, since &lt;code&gt;TARGETARCH&lt;/code&gt; returns only the architecture name without the version; in our case, just &lt;code&gt;amd64&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;As a workaround, we write a GoReleaser build post hook script that renames the folder &lt;code&gt;linux_amd64_v1&lt;/code&gt; to &lt;code&gt;linux_amd64&lt;/code&gt;, which allows us to pick the right binary based on the architecture using &lt;code&gt;TARGETARCH&lt;/code&gt;,&lt;/p&gt;
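&lt;p&gt;A minimal sketch of such a post hook, assuming the build output lives under &lt;code&gt;dist&lt;/code&gt; (the actual script in the demo repository may differ):&lt;/p&gt;

```shell
#!/usr/bin/env bash
set -euo pipefail

# rename_versioned_dirs strips the trailing "_v1" suffix that GoReleaser
# appends to amd64 output folders, e.g.
#   dist/server_linux_amd64_v1 -> dist/server_linux_amd64
rename_versioned_dirs() {
  local dist_dir="${1:-dist}"
  local dir
  for dir in "${dist_dir}"/*_v1; do
    [ -d "$dir" ] || continue
    mv "$dir" "${dir%_v1}"
  done
}

# example run against a dummy dist layout
mkdir -p dist/server_linux_amd64_v1 dist/server_linux_arm64
rename_versioned_dirs dist
ls dist
# prints:
# server_linux_amd64
# server_linux_arm64
```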


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;COPY server_linux_&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TARGETARCH&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/server /bin/server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; Special thanks to my son &lt;a href="https://twitter.com/rithulkamesh" rel="noopener noreferrer"&gt;Rithul Kamesh&lt;/a&gt; who volunteered to write that little script for me 😊.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To summarize what we did,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used &lt;a href="https://docs.docker.com/build/" rel="noopener noreferrer"&gt;Buildx&lt;/a&gt; to build multi-architecture container images&lt;/li&gt;
&lt;li&gt;Leveraged &lt;a href="https://drone.io" rel="noopener noreferrer"&gt;Drone&lt;/a&gt; pipelines to define declarative builds&lt;/li&gt;
&lt;li&gt;Used &lt;a href="https://goreleaser.com" rel="noopener noreferrer"&gt;GoReleaser&lt;/a&gt; for a standard configuration to build Go applications&lt;/li&gt;
&lt;li&gt;Finally, combined GoReleaser and Buildx via Drone plugins to perform reproducible multi-architecture builds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I have also written a &lt;a href="https://hub.docker.com/extensions/drone/drone-ci-docker-extension" rel="noopener noreferrer"&gt;Drone CI Docker Extension&lt;/a&gt; that lets you run this demo, or any other Drone pipeline, right from Docker Desktop. If you have a minute, please try it out and let me know your feedback and suggestions for improvement.&lt;/p&gt;

</description>
      <category>go</category>
      <category>docker</category>
      <category>drone</category>
      <category>containerapps</category>
    </item>
    <item>
      <title>Continuous Integration with Drone on Kubernetes</title>
      <dc:creator>Kamesh Sampath</dc:creator>
      <pubDate>Thu, 21 Jul 2022 11:21:00 +0000</pubDate>
      <link>https://dev.to/kameshsampath/continuous-integration-with-drone-on-kubernetes-1jil</link>
      <guid>https://dev.to/kameshsampath/continuous-integration-with-drone-on-kubernetes-1jil</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Over the past few years, lots of organizations have started to adopt Cloud Native architectures. Despite the adoption of Cloud Native architectures, many companies haven’t achieved optimal results. Wondering why? One of  the reasons is our adherence to traditional ways of building and deploying Cloud Native applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; has become the de facto Cloud Native deployment platform, solving one of the main Cloud Native problems: "deploying" applications quickly, efficiently and reliably. It offers radically easy scaling and fault tolerance. Despite this, not many Continuous Integration(CI) systems utilize the benefits of Kubernetes. None of the existing build systems offer the capabilities that are native to Kubernetes like in-cluster building, defining the build resources using CRDs, leveraging underlying security and access controls, etc. These missing features of Kubernetes made the Cloud Native architectures to be less effective and more complex.&lt;/p&gt;

&lt;p&gt;Let me introduce an Open Source project, &lt;a href="https://drone.io" rel="noopener noreferrer"&gt;Drone&lt;/a&gt;, a cloud native self-service Continuous Integration platform. Ten years ago, Drone was the first CI tool to leverage containers to run pipeline steps independently of each other. Today, with over 100M Docker pulls and the most GitHub stars of any Continuous Integration solution, Drone offers a mature, Kubernetes-based CI system harnessing the scaling and fault-tolerance characteristics of Cloud Native architectures. Drone helps solve the next part of the puzzle by running Kubernetes-native in-cluster builds.&lt;/p&gt;

&lt;p&gt;In this blog, let us see how to set up &lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt; and Drone together on our laptops to build Kubernetes-native pipelines, which could then be moved to cloud platforms like &lt;a href="https://harness.io/" rel="noopener noreferrer"&gt;Harness CI&lt;/a&gt; for broader team-based development.&lt;/p&gt;

&lt;p&gt;This blog is a tutorial where I explain the steps required to use KinD and Drone to set up CI with Kubernetes on your local machine. At the end of these steps, you will have a completely functional Kubernetes + CI setup that can help you build and deploy Cloud Native applications on to Kubernetes on your laptop.&lt;/p&gt;

&lt;h2&gt;
  
  
  Required tools
&lt;/h2&gt;

&lt;p&gt;To complete this setup successfully, you need the following tools on your laptop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; or Docker on Linux&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kind.sigs.k8s.io/" rel="noopener noreferrer"&gt;kind&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://helm.sh/" rel="noopener noreferrer"&gt;Helm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kustomize.io/" rel="noopener noreferrer"&gt;Kustomize&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Kubectl&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.man7.org/linux/man-pages/man1/envsubst.1.html" rel="noopener noreferrer"&gt;envsusbst&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most Linux distributions provide &lt;strong&gt;envsubst&lt;/strong&gt; via the &lt;a href="https://www.gnu.org/software/gettext/" rel="noopener noreferrer"&gt;gettext&lt;/a&gt; package. On macOS, it can be installed using &lt;a href="https://brew.sh/" rel="noopener noreferrer"&gt;Homebrew&lt;/a&gt; with &lt;code&gt;brew install gettext&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo Sources
&lt;/h2&gt;

&lt;p&gt;The accompanying code for this blog i.e. the demo sources is available on my &lt;a href="https://github.com/kameshsampath/drone-on-k8s" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;. Let us clone the same on to our machine,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/kameshsampath/drone-on-k8s &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$_&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PROJECT_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Throughout this blog we will use &lt;code&gt;$PROJECT_HOME&lt;/code&gt; to refer to the demo sources folder that we cloned above.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Alright, we are all set to get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup Kubernetes Cluster
&lt;/h2&gt;

&lt;p&gt;As said earlier, we will use kind as our local Kubernetes cluster, but for this blog we will make the following customisations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a local container registry where we can push and pull the container images used in our Kubernetes cluster. Check the kind &lt;a href="https://kind.sigs.k8s.io/docs/user/local-registry/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; for more details.&lt;/li&gt;
&lt;li&gt;Add extra port mappings to allow us to access the Drone server and the &lt;a href="https://gitea.com/" rel="noopener noreferrer"&gt;Gitea&lt;/a&gt; git repository.&lt;/li&gt;
&lt;/ul&gt;
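&lt;p&gt;The port-mapping part of that customisation boils down to a kind cluster config along these lines. This is only a sketch: the node ports match the Gitea and Drone helm values used later in this blog, but the actual script may differ, and the local registry wiring is omitted:&lt;/p&gt;

```yaml
# kind cluster config sketch: expose the NodePorts used by
# Gitea (30950) and Drone (30980) on the host as 3000 and 8080
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30950
        hostPort: 3000
      - containerPort: 30980
        hostPort: 8080
```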

&lt;p&gt;To make things easier, all the aforementioned customisations have been compiled into a utility script, &lt;a href="https://github.com/kameshsampath/drone-on-k8s/blob/main/bin/kind.sh" rel="noopener noreferrer"&gt;$PROJECT_HOME/bin/kind.sh&lt;/a&gt;. To start the kind cluster with these customisations, just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/bin/kind.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Version Control System
&lt;/h2&gt;

&lt;p&gt;Without a &lt;em&gt;Version Control System (VCS)&lt;/em&gt;, CI makes no sense. One of the primary goals of this blog is to show how to run a local VCS so that you can build your applications without needing an external VCS like GitHub or GitLab. For our setup we will use &lt;a href="https://gitea.com/" rel="noopener noreferrer"&gt;Gitea&lt;/a&gt;, a painless, self-hosted Git service. Gitea is easy to set up and provides &lt;a href="https://docs.gitea.io/en-us/install-on-kubernetes/" rel="noopener noreferrer"&gt;helm charts&lt;/a&gt; for Kubernetes-based installation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Helm Values
&lt;/h3&gt;

&lt;p&gt;The contents of the helm values &lt;a href="https://github.com/kameshsampath/drone-on-k8s/blob/main/helm_vars/gitea/values.yaml" rel="noopener noreferrer"&gt;file&lt;/a&gt; used to set up Gitea are shown below. The settings are self-explanatory; for more details check the &lt;a href="https://docs.gitea.io/en-us/config-cheat-sheet" rel="noopener noreferrer"&gt;cheat sheet&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# the Kubernetes service gitea-http'  service type&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt; 
    &lt;span class="c1"&gt;# the gitea-http service port&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
    &lt;span class="c1"&gt;# this port will be used in KinD extra port mappings to allow accessing the &lt;/span&gt;
    &lt;span class="c1"&gt;# Gitea server from our laptops&lt;/span&gt;
    &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30950&lt;/span&gt;
&lt;span class="na"&gt;gitea&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# the admin credentials to access Gitea typically push/pull operations&lt;/span&gt;
  &lt;span class="na"&gt;admin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# DON'T use username admin as its reserved and gitea will &lt;/span&gt;
    &lt;span class="c1"&gt;# fail to start&lt;/span&gt;
    &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo@123&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin@example.com&lt;/span&gt;
  &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# for this demo we will use http protocol to access Git repos&lt;/span&gt;
      &lt;span class="na"&gt;PROTOCOL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="c1"&gt;# the port gitea will listen on&lt;/span&gt;
      &lt;span class="na"&gt;HTTP_PORT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
      &lt;span class="c1"&gt;# the Git domain - all the repositories will be using this domain&lt;/span&gt;
      &lt;span class="na"&gt;DOMAIN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gitea-127.0.0.1.sslip.io&lt;/span&gt;
      &lt;span class="c1"&gt;# The clone base url e.g. if repo is demo/foo the clone url will be &lt;/span&gt;
      &lt;span class="c1"&gt;# http://gitea-127.0.0.1.sslip.io:3000/demo/foo&lt;/span&gt;
      &lt;span class="na"&gt;ROOT_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://gitea-127.0.0.1.sslip.io:3000/&lt;/span&gt;
    &lt;span class="na"&gt;webhook&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# since we will deploy to local network we will allow all hosts&lt;/span&gt;
      &lt;span class="na"&gt;ALLOWED_HOST_LIST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
      &lt;span class="c1"&gt;# since we are in http mode disable TLS&lt;/span&gt;
      &lt;span class="na"&gt;SKIP_TLS_VERIFY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the gitea helm repo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command  to deploy Gitea,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--install&lt;/span&gt; gitea gitea-charts/gitea &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--values&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/helm_vars/gitea/values.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful deployment of Gitea should show the following pods and services in the &lt;em&gt;default&lt;/em&gt; namespace when running the command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods,svc &lt;span class="nt"&gt;-lapp&lt;/span&gt;.kubernetes.io/instance&lt;span class="o"&gt;=&lt;/span&gt;gitea

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                                  READY   STATUS    RESTARTS   AGE
pod/gitea-0                           1/1     Running   0          4m32s
pod/gitea-memcached-b87476455-4kqvp   1/1     Running   0          4m32s
pod/gitea-postgresql-0                1/1     Running   0          4m32s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;          AGE
service/gitea-http                  NodePort    10.96.55.25     &amp;lt;none&amp;gt;        3000:30950/TCP   4m32s
service/gitea-memcached             ClusterIP   10.96.176.235   &amp;lt;none&amp;gt;        11211/TCP        4m32s
service/gitea-postgresql            ClusterIP   10.96.59.23     &amp;lt;none&amp;gt;        5432/TCP         4m32s
service/gitea-postgresql-headless   ClusterIP   None            &amp;lt;none&amp;gt;        5432/TCP         4m32s
service/gitea-ssh                   ClusterIP   None            &amp;lt;none&amp;gt;        22/TCP           4m32s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Environment Variables
&lt;/h2&gt;

&lt;p&gt;As a convenience, let us set a few environment variables that will be used by the commands in the upcoming sections of the blog.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gitea
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Gitea domain&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITEA_DOMAIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gitea-127.0.0.1.sslip.io"&lt;/span&gt;
&lt;span class="c"&gt;# Gitea URL&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GITEA_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITEA_DOMAIN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:3000"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can access Gitea in your browser by opening &lt;code&gt;${GITEA_URL}&lt;/code&gt;. The default credentials are &lt;code&gt;demo/demo@123&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658029947645%2FhH6gMkXZs.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658029947645%2FhH6gMkXZs.png%2520align%3D" alt="Gitea Home" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Drone
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# the drone server host&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DRONE_SERVER_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"drone-127.0.0.1.sslip.io:8080"&lt;/span&gt;
&lt;span class="c"&gt;# the drone server web console&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DRONE_SERVER_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DRONE_SERVER_HOST&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Drone Gitea oAuth Application
&lt;/h2&gt;

&lt;p&gt;Drone will use Gitea to pull/push source code and to add &lt;a href="https://docs.gitea.io/en-us/webhooks/" rel="noopener noreferrer"&gt;webhooks&lt;/a&gt; that trigger builds on source changes. For these actions it requires an &lt;a href="https://en.wikipedia.org/wiki/OAuth" rel="noopener noreferrer"&gt;oAuth&lt;/a&gt; application to be configured on Gitea.&lt;/p&gt;

&lt;p&gt;The demo sources include a little utility called &lt;code&gt;gitea-config&lt;/code&gt; that creates the oAuth application in Gitea and clones the &lt;a href="https://github.com/kameshsampath/drone-k8s-quickstart" rel="noopener noreferrer"&gt;quickstart&lt;/a&gt; repository into Gitea as &lt;strong&gt;drone-quickstart&lt;/strong&gt;. We will use the &lt;em&gt;drone-quickstart&lt;/em&gt; repository to validate our setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/bin/gitea-config-darwin-arm64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GITEA_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-dh&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DRONE_SERVER_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Use the gitea-config binary corresponding to your OS and architecture. In the command above we used the macOS arm64 binary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658038302029%2FftAF2w1Fc.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658038302029%2FftAF2w1Fc.png%2520align%3D" alt="Drone Quickstart Repository" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658038272231%2FtQ5s7jwus.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658038272231%2FtQ5s7jwus.png%2520align%3D" alt="Drone Gitea oAuth2 Application" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658038288303%2Fr4OTjkDFF.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658038288303%2Fr4OTjkDFF.png%2520align%3D" alt="Drone Gitea oAuth2 Application Details" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;gitea-config&lt;/code&gt; utility creates a &lt;code&gt;.env&lt;/code&gt; file under &lt;code&gt;$PROJECT_HOME/k8s&lt;/code&gt; that has a few Drone environment variables, which will be used while deploying the Drone server in the upcoming steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;DRONE_GITEA_CLIENT_ID&lt;/code&gt;:  The Gitea oAuth Client ID&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_GITEA_CLIENT_SECRET&lt;/code&gt;: The Gitea oAuth Client Secret&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DRONE_RPC_SECRET&lt;/code&gt;: A shared secret used by the server and runners to authenticate each other; it can be generated with e.g. &lt;code&gt;openssl rand -hex 16&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
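&lt;p&gt;For instance, such an RPC secret could be generated like this (a generic sketch; the &lt;code&gt;gitea-config&lt;/code&gt; utility already handles this for you):&lt;/p&gt;

```shell
# generate a 16-byte random secret, hex-encoded to 32 characters,
# suitable for authenticating the Drone server and runners
DRONE_RPC_SECRET="$(openssl rand -hex 16)"
echo "${DRONE_RPC_SECRET}"
```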

&lt;h2&gt;
  
  
  Deploy Drone
&lt;/h2&gt;

&lt;p&gt;For our demo, the Drone server will be deployed into a namespace called &lt;code&gt;drone&lt;/code&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create ns drone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add &lt;strong&gt;drone&lt;/strong&gt; helm repo,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add drone https://charts.drone.io
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following content will be used as helm values &lt;a href="https://github.com/kameshsampath/drone-on-k8s/blob/main/helm_vars/drone/values.yaml" rel="noopener noreferrer"&gt;file&lt;/a&gt; to deploy Drone server,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# the Drone Kubernetes service type&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="c1"&gt;# this port will be used in KinD extra port mappings to allow accessing the &lt;/span&gt;
  &lt;span class="c1"&gt;# drone server from our laptops&lt;/span&gt;
  &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30980&lt;/span&gt;

&lt;span class="na"&gt;extraSecretNamesForEnvFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="c1"&gt;# all the other as $PROJECT_HOME/k8s/.env variables are loaded via this secret&lt;/span&gt;
   &lt;span class="c1"&gt;# https://docs.drone.io/server/reference/&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;drone-demos-secret&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# the Drone server host typically what the drone runners will use to &lt;/span&gt;
  &lt;span class="c1"&gt;# communicate with the server&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_SERVER_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;drone-127.0.0.1.sslip.io:8080&lt;/span&gt;
  &lt;span class="c1"&gt;# Since we run Gitea in http mode we will skip TLS verification&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_GITEA_SKIP_VERIFY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="c1"&gt;# The url where Gitea could be reached, typically used while &lt;/span&gt;
  &lt;span class="c1"&gt;# cloning the sources&lt;/span&gt;
  &lt;span class="c1"&gt;# https://docs.drone.io/server/provider/gitea/&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_GITEA_SERVER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://gitea-127.0.0.1.sslip.io:3000/&lt;/span&gt;
  &lt;span class="c1"&gt;# For this local setup and demo we wil run Drone in http mode&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_SERVER_PROTO&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following helm command to deploy Drone server,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; drone drone/drone &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--values&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/helm_vars/drone/values.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;drone &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--post-renderer&lt;/span&gt;  k8s/kustomize &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful Drone deployment should show the following resources in &lt;em&gt;drone&lt;/em&gt; namespace,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods,svc,secrets &lt;span class="nt"&gt;-n&lt;/span&gt; drone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                         READY   STATUS    RESTARTS   AGE
pod/drone-5bb66b9d97-hbpl5   1/1     Running   0          9s

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;          AGE
service/drone   NodePort   10.96.184.123   &amp;lt;none&amp;gt;        8080:30980/TCP   9s

NAME                                 TYPE                 DATA   AGE
secret/drone-demos-secret            Opaque               3      9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Host Aliases
&lt;/h3&gt;

&lt;p&gt;As you might have noticed, we use &lt;a href="https://sslip.io/" rel="noopener noreferrer"&gt;Magic DNS&lt;/a&gt; for Gitea and Drone. This causes name resolution issues inside the Drone and Gitea pods, because the URL &lt;em&gt;gitea-127.0.0.1.sslip.io&lt;/em&gt; resolves to &lt;em&gt;127.0.0.1&lt;/em&gt; inside the Drone server pod. For our setup to work, we need &lt;em&gt;gitea-127.0.0.1.sslip.io&lt;/em&gt; to resolve to the &lt;strong&gt;gitea-http&lt;/strong&gt; Kubernetes service on our cluster.&lt;/p&gt;

&lt;p&gt;To achieve that, we use Kubernetes &lt;a href="https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/" rel="noopener noreferrer"&gt;host aliases&lt;/a&gt; to add extra host entries (&lt;code&gt;/etc/hosts&lt;/code&gt;) to the Drone pods that resolve &lt;em&gt;gitea-127.0.0.1.sslip.io&lt;/em&gt; to the &lt;em&gt;ClusterIP&lt;/em&gt; of the &lt;strong&gt;gitea-http&lt;/strong&gt; service.&lt;/p&gt;
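&lt;p&gt;As a sketch, the extra entries a host alias adds to a pod template spec look like the following; the IP shown is a placeholder, use the actual &lt;em&gt;ClusterIP&lt;/em&gt; of the &lt;strong&gt;gitea-http&lt;/strong&gt; service on your cluster,&lt;br&gt;
&lt;/p&gt;

```yaml
# hostAliases sits under the pod (template) spec; each entry becomes
# a line in the pod's /etc/hosts
spec:
  hostAliases:
    - ip: "10.96.240.234"   # placeholder for the gitea-http ClusterIP
      hostnames:
        - "gitea-127.0.0.1.sslip.io"
```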

&lt;p&gt;There are multiple techniques that allow us to add host entries to Kubernetes deployments. The first one, used in the earlier helm command to deploy the Drone server, is a &lt;a href="https://helm.sh/docs/topics/advanced/#usage" rel="noopener noreferrer"&gt;helm post renderer&lt;/a&gt;. The post renderer lets us patch the Drone deployment from the helm chart with a hostAliases entry that resolves &lt;em&gt;gitea-127.0.0.1.sslip.io&lt;/em&gt; to the &lt;em&gt;ClusterIP&lt;/em&gt; address of &lt;strong&gt;gitea-http&lt;/strong&gt;.&lt;/p&gt;
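&lt;p&gt;A post renderer is simply an executable that receives the fully rendered manifests on stdin and prints the (possibly transformed) manifests on stdout. A minimal sketch of that contract is shown below; the real transformation (like the &lt;code&gt;k8s/kustomize&lt;/code&gt; script in the repository) would happen where the comment indicates,&lt;br&gt;
&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Helm pipes the rendered manifests to stdin and replaces them with
# whatever this script prints on stdout. A real post renderer would
# transform the manifests here, e.g. run them through `kustomize build`
# with a hostAliases patch; this sketch passes them through unchanged.
cat
```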

&lt;p&gt;Just as we made the Drone deployment resolve Gitea, we also need the Gitea pods to resolve the Drone server when sending the &lt;a href="https://docs.gitea.io/en-us/webhooks/" rel="noopener noreferrer"&gt;webhook&lt;/a&gt; payload that triggers a build.&lt;/p&gt;

&lt;p&gt;This time let us use the &lt;a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="noopener noreferrer"&gt;kubectl patch&lt;/a&gt; technique to update the Gitea statefulset so that &lt;code&gt;drone-127.0.0.1.sslip.io&lt;/code&gt; resolves to the &lt;em&gt;ClusterIP&lt;/em&gt; of the &lt;em&gt;drone&lt;/em&gt; service.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/kameshsampath/drone-on-k8s/blob/main/k8s/patch.json" rel="noopener noreferrer"&gt;patch&lt;/a&gt; that will be applied to the Gitea statefulset is as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"template"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"spec"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"hostAliases"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"ip"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${DRONE_SERVICE_IP}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"hostnames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"drone-127.0.0.1.sslip.io"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following commands to patch and update the gitea statefulset,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DRONE_SERVICE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; drone drone &lt;span class="nt"&gt;-ojsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.clusterIP}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
kubectl patch statefulset gitea &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="nt"&gt;--patch&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;envsubst&amp;lt;&lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/k8s/patch.json&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TIP&lt;/strong&gt;:  To replace the environment variables in the patch we use &lt;a href="https://www.man7.org/linux/man-pages/man1/envsubst.1.html" rel="noopener noreferrer"&gt;envsubst&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Wait for the Gitea pods to be updated and restarted,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout status statefulset gitea &lt;span class="nt"&gt;--timeout&lt;/span&gt; 30s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the updates to the  gitea pod's &lt;code&gt;/etc/hosts&lt;/code&gt; file by running the command,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; gitea-0 &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="nb"&gt;cat&lt;/span&gt; /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should have an entry like,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Entries added by HostAliases.&lt;/span&gt;
10.96.184.123   drone-127.0.0.1.sslip.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here &lt;em&gt;10.96.184.123&lt;/em&gt; is the &lt;em&gt;ClusterIP&lt;/em&gt; of the &lt;strong&gt;drone&lt;/strong&gt; service on my setup; run the following command to verify yours,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; drone drone
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;          AGE
drone   NodePort   10.96.184.123   &amp;lt;none&amp;gt;        8080:30980/TCP   6m14s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can do similar checks with the Drone pods to ensure that their &lt;code&gt;/etc/hosts&lt;/code&gt; has an entry mapping &lt;em&gt;gitea-127.0.0.1.sslip.io&lt;/em&gt; to the &lt;strong&gt;gitea-http&lt;/strong&gt; &lt;em&gt;ClusterIP&lt;/em&gt;.&lt;/p&gt;
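&lt;p&gt;For example (the generated pod name will differ on your cluster, so we address the pod via its deployment),&lt;br&gt;
&lt;/p&gt;

```shell
# inspect /etc/hosts inside the Drone server pod; `deploy/drone` avoids
# having to look up the generated pod name
kubectl exec -n drone deploy/drone -- cat /etc/hosts
```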

&lt;p&gt;What we did so far,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployed a customized Kubernetes cluster using kind&lt;/li&gt;
&lt;li&gt;Deployed Gitea on to our Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Deployed Drone Server on to our Kubernetes Cluster&lt;/li&gt;
&lt;li&gt;Created an OAuth application on Gitea to authorize the Drone server&lt;/li&gt;
&lt;li&gt;Created a repository on Gitea that will be used to test our setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploy Drone Kubernetes Runner
&lt;/h2&gt;

&lt;p&gt;To run the Drone pipelines on Kubernetes we need to deploy the &lt;a href="https://docs.drone.io/runner/kubernetes/overview/" rel="noopener noreferrer"&gt;Drone Kubernetes Runner&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Deploy the &lt;code&gt;drone-runner-kube&lt;/code&gt; with following &lt;a href="https://github.com/kameshsampath/drone-on-k8s/blob/main/helm_vars/drone-runner-kube/values.yaml" rel="noopener noreferrer"&gt;values&lt;/a&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;extraSecretNamesForEnvFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="c1"&gt;# all the other as env variables are loaded via this secret&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;drone-demos-secret&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# the url to reach the Drone server&lt;/span&gt;
  &lt;span class="c1"&gt;# we point it to the local drone Kubernetes service drone on port 8080&lt;/span&gt;
  &lt;span class="na"&gt;DRONE_RPC_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drone:8080"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the helm install to deploy the &lt;code&gt;drone-runner-kube&lt;/code&gt;,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade &lt;span class="nt"&gt;--install&lt;/span&gt; drone-runner-kube drone/drone-runner-kube &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;drone &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--values&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_HOME&lt;/span&gt;/helm_vars/drone-runner-kube/values.yaml  &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--wait&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Querying the Kubernetes resources on &lt;em&gt;drone&lt;/em&gt; namespace should now return the &lt;code&gt;drone-runner-kube&lt;/code&gt; pod and service,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods,svc &lt;span class="nt"&gt;-n&lt;/span&gt; drone &lt;span class="nt"&gt;-lapp&lt;/span&gt;.kubernetes.io/name&lt;span class="o"&gt;=&lt;/span&gt;drone-runner-kube
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                                     READY   STATUS    RESTARTS   AGE
pod/drone-runner-kube-59f98956b4-mbr9c   1/1     Running   0          41s

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;    AGE
service/drone-runner-kube   ClusterIP   10.96.196.54   &amp;lt;none&amp;gt;        3000/TCP   41s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the Drone server web console in your browser using the URL &lt;code&gt;${DRONE_SERVER_URL}&lt;/code&gt;, and follow the on-screen instructions to complete the registration and activation of our &lt;code&gt;drone-quickstart&lt;/code&gt; repository.&lt;/p&gt;
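&lt;p&gt;With the values used in this setup, the URL works out to the server protocol plus the server host from the Drone values file (&lt;code&gt;DRONE_SERVER_URL&lt;/code&gt; is assumed to have been exported earlier in the walkthrough),&lt;br&gt;
&lt;/p&gt;

```shell
# compose the Drone server URL from the values in helm_vars/drone/values.yaml
DRONE_SERVER_PROTO="http"
DRONE_SERVER_HOST="drone-127.0.0.1.sslip.io:8080"
DRONE_SERVER_URL="${DRONE_SERVER_PROTO}://${DRONE_SERVER_HOST}"
echo "$DRONE_SERVER_URL"
# → http://drone-127.0.0.1.sslip.io:8080
```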

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658033224439%2F4HOq0TrB2.gif%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658033224439%2F4HOq0TrB2.gif%2520align%3D" alt="Drone Registration" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's run our first pipeline
&lt;/h2&gt;

&lt;p&gt;Let us clone the quickstart repository to the folder of our choice on our local machine,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone http://gitea-127.0.0.1.sslip.io:3000/demo/drone-quickstart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;:  The default git credentials for pushing are &lt;code&gt;demo/demo@123&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open the &lt;code&gt;drone-quickstart&lt;/code&gt; project with your favorite editor and make a small change, e.g. add some dummy text to the README, to trigger a build. The build will fail as shown below,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658035032895%2F2iv3XBeh4.gif%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658035032895%2F2iv3XBeh4.gif%2520align%3D" alt="Drone Failed Build" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't worry, that's what we are going to fix now. We need to add the same &lt;strong&gt;hostAliases&lt;/strong&gt; to our Drone pipeline pods as well, by updating &lt;code&gt;.drone.yml&lt;/code&gt; with the &lt;strong&gt;ClusterIP&lt;/strong&gt; of &lt;strong&gt;gitea-http&lt;/strong&gt; and a &lt;em&gt;hostnames&lt;/em&gt; entry for &lt;em&gt;gitea-127.0.0.1.sslip.io&lt;/em&gt;, so that the pipeline pods are able to clone the sources from our Gitea repository.&lt;/p&gt;

&lt;p&gt;The following snippet shows the updated &lt;code&gt;.drone.yml&lt;/code&gt; with entries for host aliases,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;:  The &lt;em&gt;ClusterIP&lt;/em&gt; of &lt;strong&gt;gitea-http&lt;/strong&gt; may vary; to get the &lt;em&gt;ClusterIP&lt;/em&gt; of the &lt;strong&gt;gitea-http&lt;/strong&gt; service run the command &lt;code&gt;kubectl get svc gitea-http -ojsonpath='{.spec.clusterIP}'&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pipeline&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;say hello&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo hello world&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;good bye hello&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo good bye&lt;/span&gt;

&lt;span class="c1"&gt;# updates to match your local setup&lt;/span&gt;
&lt;span class="na"&gt;host_aliases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# kubectl get svc gitea-http -ojsonpath='{.spec.clusterIP}'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.96.240.234&lt;/span&gt;
    &lt;span class="na"&gt;hostnames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gitea-127.0.0.1.sslip.io"&lt;/span&gt;

&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;branch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Commit and push the code to trigger a new Drone pipeline build, and you will see it being successful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658035440302%2FTBBPrmhBD.gif%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1658035440302%2FTBBPrmhBD.gif%2520align%3D" alt="Drone Successful Build" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;When you are done with experimenting, you can clean up the setup by running the following command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kind delete cluster &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;drone-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We now have a fully functional CI setup with Drone on Kubernetes, letting us build our Cloud Native (Kubernetes) applications and continuously integrate them with ease.&lt;/p&gt;

&lt;p&gt;To summarize what we did in this blog,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deployed a customized Kubernetes cluster using kind&lt;/li&gt;
&lt;li&gt;Deployed Gitea on to our Kubernetes cluster&lt;/li&gt;
&lt;li&gt;Deployed Drone Server on to our Kubernetes Cluster&lt;/li&gt;
&lt;li&gt;Created an OAuth application on Gitea to authorize Drone&lt;/li&gt;
&lt;li&gt;Created a repository on Gitea that will be used to test our setup&lt;/li&gt;
&lt;li&gt;Deployed Drone Kubernetes runner to run pipelines on Kubernetes Cluster&lt;/li&gt;
&lt;li&gt;Built our Quick start application using Drone pipelines on Kubernetes&lt;/li&gt;
&lt;li&gt;Leveraged Kubernetes &lt;a href="https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/" rel="noopener noreferrer"&gt;host aliases&lt;/a&gt; to  add host entries to our deployments to resolve local URLs&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>drone</category>
      <category>kind</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
