<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: richamishra006</title>
    <description>The latest articles on DEV Community by richamishra006 (@richamishra006).</description>
    <link>https://dev.to/richamishra006</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F981686%2Fc2f4ff0b-952a-4637-90bc-6773150a2e92.png</url>
      <title>DEV Community: richamishra006</title>
      <link>https://dev.to/richamishra006</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/richamishra006"/>
    <language>en</language>
    <item>
      <title>Simplifying Kubernetes Native Testing with TestKube</title>
      <dc:creator>richamishra006</dc:creator>
      <pubDate>Wed, 26 Jul 2023 13:44:34 +0000</pubDate>
      <link>https://dev.to/richamishra006/simplifying-kubernetes-native-testing-with-testkube-20cn</link>
      <guid>https://dev.to/richamishra006/simplifying-kubernetes-native-testing-with-testkube-20cn</guid>
      <description>&lt;p&gt;Testing methodology refers to the process and procedures used to evaluate the quality, correctness, and completeness of software applications. The importance of testing methodology lies in its ability to identify defects and issues early in the development process, thus reducing the time and effort to fix them later. By identifying the defects early, testing can help to improve the functionality and reliability of software applications.&lt;/p&gt;

&lt;p&gt;There are numerous software testing tools available on the market that serve different purposes and testing requirements. However, popular testing tools such as Selenium, JMeter, Appium, SoapUI, Postman, and Cucumber may not be suitable for testing a cloud native application.&lt;/p&gt;

&lt;p&gt;Testing cloud native applications differs from traditional testing because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cloud native applications are typically built using a distributed and microservices-based architecture. This means testing the interactions and integration between various microservices and ensuring seamless collaboration. In contrast, traditional applications often have a monolithic architecture, so you have to focus on testing individual components.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cloud native applications abstract the underlying infrastructure using services like Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). Testing cloud native applications requires verifying compatibility and functionality across different cloud providers and configurations. Traditional applications are usually developed and tested within a specific infrastructure environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Due to the complex nature of cloud native applications, a cloud native testing framework is a better choice as it enables developers and testers to build and test applications designed to run on cloud platforms. Cloud native testing tools are &lt;a href="https://www.infracloud.io/cloud-native-consulting/"&gt;developed specifically for cloud native environments&lt;/a&gt;. Most importantly, they allow you to deploy tests in your clusters and make the executions super scalable. Plus, they are not coupled to any CI/CD framework such as Jenkins, GitHub Actions, etc.&lt;/p&gt;

&lt;p&gt;One such cloud native testing framework is Testkube - which will be the focus of this blog post. We will cover what Testkube is, its capabilities, and how you can run tests in your Kubernetes cluster using it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Testkube?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://testkube.kubeshop.io/"&gt;Testkube&lt;/a&gt; is an open source, cloud native testing framework that simplifies the testing of Kubernetes applications. It allows users to store, execute and manage tests as part of a Kubernetes cluster. With Testkube, users can utilize multiple testing frameworks, orchestrate tests and perform automated testing. Additionally, it has a user-friendly web-based UI and CLI (command line interface) for better visibility and management.&lt;/p&gt;

&lt;p&gt;Now that we have an understanding of Testkube and how it simplifies the testing of Kubernetes applications, let’s dive into its capabilities and use cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Capabilities of Testkube
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run your tests inside your cluster:&lt;/strong&gt; Testkube enables users to run their tests directly inside their Kubernetes cluster instead of executing them from a CI pipeline. This provides a significant &lt;a href="https://www.infracloud.io/cloud-native-networking-services/"&gt;networking security&lt;/a&gt; advantage, as it eliminates the need to expose the cluster to the outside world in order to test its applications. By keeping the testing environment within the cluster, Testkube helps to ensure the integrity and security of the testing process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Support for multiple testing frameworks:&lt;/strong&gt; With a flexible test execution framework that comes with pre-built executors for popular testing tools like Postman, &lt;a href="https://www.infracloud.io/blogs/introduction-cypress-ui-test-automation/"&gt;Cypress&lt;/a&gt;, and K6, Testkube enables users to leverage existing testing assets. It also provides the option to create custom executors for any type of tests users want to run within their cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test orchestration:&lt;/strong&gt; Testkube empowers users to orchestrate multiple tests, making it easier to manage complex testing scenarios.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration with CI/CD:&lt;/strong&gt; You can integrate Testkube with the CI/CD tool of your choice, including &lt;a href="https://www.infracloud.io/jenkins-consulting-support/"&gt;Jenkins&lt;/a&gt;, TravisCI, and CircleCI, to make it easier to automate testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Centralized dashboard and storage of test artifacts:&lt;/strong&gt; Testkube provides a web-based UI, CLI, and a centralized dashboard for testing, which helps with visibility and management of tests. It lets you retrieve the test results generated by the testing tools from its artifact storage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When it comes to use cases, Testkube can be used by developers, DevOps teams, and QA teams for testing Kubernetes applications at various stages of the development process, from local development to production. For instance, developers can use Testkube to test their application logic, while &lt;a href="https://www.infracloud.io/devops-consulting-services/"&gt;expert DevOps teams&lt;/a&gt; can use it to automate tests in their CI/CD pipeline. QA teams can use Testkube to &lt;a href="https://www.infracloud.io/kubernetes-consulting-partner/"&gt;ensure that Kubernetes applications are functioning correctly&lt;/a&gt; in different scenarios.&lt;/p&gt;

&lt;p&gt;Now that we know what Testkube can do, let's see it in action. We will set it up to create and run a few tests, learn about test triggers, and use &lt;a href="https://www.infracloud.io//blogs/github-actions-demystified/"&gt;GitHub Actions to enable automated testing&lt;/a&gt; with each PR.&lt;/p&gt;

&lt;p&gt;Let's proceed with the installation of Testkube.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testkube installation
&lt;/h2&gt;

&lt;p&gt;First, we will install Testkube CLI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Installation of Testkube CLI
&lt;/h3&gt;

&lt;p&gt;We will use the script installation method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sSLf https://get.testkube.io | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the above command, Testkube CLI will be installed.&lt;br&gt;
Next, we will install the cluster component using this CLI.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2: Install Testkube cluster components using Testkube's CLI
&lt;/h3&gt;

&lt;p&gt;We can also use &lt;a href="https://www.infracloud.io/kubernetes-school/basics-of-helm/create-kubernetes-helm-chart/"&gt;Helm chart&lt;/a&gt; to deploy the Testkube cluster component. However, here we will be using Testkube CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testkube init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just run the above command, and the components will be installed. Make sure that the kubeconfig file points to the desired cluster where Testkube needs to be installed.&lt;/p&gt;
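&lt;p&gt;Before running init, it is worth confirming which cluster the kubeconfig currently points at. A quick sketch using standard kubectl commands (the context name below is illustrative):&lt;/p&gt;

```shell
# Show the context that `testkube init` will install into.
kubectl config current-context

# Switch context first if it is not the desired cluster
# (replace "minikube" with your context name).
kubectl config use-context minikube
```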

&lt;p&gt;Once we run the above command, the following components will be installed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Testkube API&lt;/li&gt;
&lt;li&gt;Testkube namespace&lt;/li&gt;
&lt;li&gt;CRDs for Tests, TestSuites, Executors&lt;/li&gt;
&lt;li&gt;MongoDB - used to store test results and other Testkube configuration&lt;/li&gt;
&lt;li&gt;MinIO - default (can be disabled with &lt;code&gt;--no-minio&lt;/code&gt;); used as a storage server for test artifacts&lt;/li&gt;
&lt;li&gt;NATS - used as an event bus for all test-related events&lt;/li&gt;
&lt;li&gt;Dashboard - default (can be disabled with &lt;code&gt;--no-dashboard&lt;/code&gt; flag)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To know more about the setup configurations, refer to the official &lt;a href="https://docs.testkube.io/articles/step2-installing-cluster-components"&gt;Testkube installation documentation&lt;/a&gt;.&lt;/p&gt;
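&lt;p&gt;As the component list above notes, MinIO and the dashboard are optional. A hedged sketch of a slimmer install using only the flags named above (with the trade-offs they imply):&lt;/p&gt;

```shell
# Install Testkube without the MinIO artifact store and without the dashboard.
# Test artifacts will then not be persisted, and management is CLI-only.
testkube init --no-minio --no-dashboard
```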

&lt;p&gt;Now, let’s check if all the components are deployed by running the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get pods -n testkube
NAME                                                    READY   STATUS    RESTARTS   AGE
testkube-api-server-76cb76dd9c-9tvmr                    1/1     Running   0          3m59s
testkube-dashboard-56c78fdc79-nw7cj                     1/1     Running   0          3m59s
testkube-minio-testkube-bd549c85d-rrwlq                 1/1     Running   0          3m59s
testkube-mongodb-d78699775-48jkm                        1/1     Running   0          3m59s
testkube-nats-0                                         3/3     Running   0          3m59s
testkube-nats-box-5b555bc9c4-w9zfj                      1/1     Running   0          3m59s
testkube-operator-controller-manager-7cb8cdcbb9-smcqr   2/2     Running   0          3m59s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, all components are running fine. Next, we will be creating and running tests using Testkube.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create and run tests
&lt;/h2&gt;

&lt;p&gt;I have created a basic Django application, available at &lt;a href="https://github.com/infracloudio/testkube-django-app/tree/master"&gt;this repo&lt;/a&gt;, which prints “The install worked successfully”. To set up this application, I am using minikube with ingress enabled. First we will deploy this application in our cluster, and then we will create tests for it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Clone the repository and create the deployment, service, and ingress by running the commands below.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/infracloudio/testkube-django-app.git
cd testkube-django-app/myproject
kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Once we run the above commands, a pod, a service, and an ingress will be created.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get pods | grep myproject
myproject-7777f9f588-xbcnr   1/1     Running   0             60s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the ingress is created, we will update /etc/hosts with the hostname and IP address of the ingress, as shown in the screenshot below, so that we can access the application endpoint at django-test.com.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get ingress
NAME                 CLASS   HOSTS             ADDRESS        PORTS   AGE
django-app-ingress   nginx   django-test.com   192.168.49.2   80      17h
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iMOm9aLB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6onejvutll1wmqu5y9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iMOm9aLB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6onejvutll1wmqu5y9y.png" alt="Update hostname and IP" width="583" height="203"&gt;&lt;/a&gt;&lt;/p&gt;
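&lt;p&gt;The change in the screenshot can be made with a one-liner, assuming the minikube ingress IP shown in the output above (run with sudo, since /etc/hosts is root-owned):&lt;/p&gt;

```shell
# Map the ingress hostname to the minikube IP so it resolves locally.
echo "192.168.49.2 django-test.com" | sudo tee -a /etc/hosts

# Verify the application responds through the ingress.
curl -s http://django-test.com | grep "The install worked successfully"
```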

&lt;p&gt;After the application pod is created, we will create integration and load tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration tests
&lt;/h3&gt;

&lt;p&gt;I have written a sample integration test in Go, so we will use &lt;a href="https://docs.testkube.io/test-types/executor-ginkgo"&gt;Ginkgo executor&lt;/a&gt; which supports the tests written in Go.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To create the test, run the command below.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl testkube create test \
  --git-uri https://github.com/infracloudio/testkube-django-app.git \
  --git-path myproject/examples/ \
  --type ginkgo/test \
  --name django-app-test \
  --git-branch master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;git-uri&lt;/code&gt; is the URL of the GitHub repository where the test case exists, &lt;code&gt;git-path&lt;/code&gt; points to the directory containing the test, and &lt;code&gt;type&lt;/code&gt; selects the Ginkgo executor.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;Once we run the above command, a test will be created. We can check it by running a command as shown below.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get tests.tests.testkube.io -A
NAMESPACE   NAME              AGE
testkube    django-app-test   31m
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next, we will access the Testkube UI by running the command below. Once we execute it, a web page will open in the default browser at localhost:8080.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testkube dashboard
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--henOSeRg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xld744c8awwool9fvn2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--henOSeRg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xld744c8awwool9fvn2.png" alt="Web page will open after command is executed" width="698" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Now that we can see the test is created, we will run it by clicking the Run now option as shown below.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m2R54SqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98qut7gkbt5pnbagh7bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m2R54SqE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98qut7gkbt5pnbagh7bg.png" alt="Run the tests by clicking on Run now option" width="701" height="305"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;After we click Run now, the test will be triggered, and we can see in the output whether the test has passed.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uQZNjEcx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmck5xb8d3j85lg4wyug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uQZNjEcx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmck5xb8d3j85lg4wyug.png" alt="Test output" width="701" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we can see that the tests have passed.&lt;/p&gt;
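&lt;p&gt;The dashboard is not the only way to start a run; the same test can be triggered from the CLI. A sketch based on the Testkube CLI conventions used earlier (treat the exact flags as an assumption and check &lt;code&gt;testkube run test --help&lt;/code&gt;):&lt;/p&gt;

```shell
# Trigger the test and follow its status until it completes.
testkube run test django-app-test --watch

# List past executions to review results later.
testkube get executions
```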

&lt;p&gt;Similarly, we can see an overview of the past results on the dashboard, along with other details like execution duration, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--J64rlE21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgszvttz9dy9ufy37ejv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--J64rlE21--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgszvttz9dy9ufy37ejv.png" alt="Test run history on dashboard" width="742" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Load tests
&lt;/h3&gt;

&lt;p&gt;Now we will use the Testkube &lt;a href="https://docs.testkube.io/test-types/executor-k6/"&gt;K6 executor&lt;/a&gt; to perform a load test on the same application we deployed above.&lt;br&gt;
The load test script is available as the &lt;a href="https://github.com/infracloudio/testkube-django-app/blob/master/myproject/loadtest.js"&gt;loadtest.js file in the demo repo&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You can run the following command to create the test.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;testkube create test --name k6-test --type k6/script \
  --test-content-type git-dir \
  --git-uri https://github.com/infracloudio/testkube-django-app.git \
  --git-branch master \
  --git-path myproject/ \
  --executor-args "--vus" \
  --executor-args "5" --executor-args "--duration" \
  --executor-args "10m" --executor-args loadtest.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Here, &lt;code&gt;vus&lt;/code&gt; stands for virtual users; we create the test with 5 virtual users and a duration of 10 minutes.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;After executing the above command, let's check if the test has been created.&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;➜ kubectl get tests.tests.testkube.io -A
NAMESPACE   NAME              AGE
testkube    django-app-test   5h52m
testkube    k6-test           9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the test has been created, we will run it from the UI and check the output. We need to wait a few minutes for the load to be generated for the 5 virtual users, and then check the result.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xLUpB2e3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7q7uuo9k5l6ufkvbfuxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xLUpB2e3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7q7uuo9k5l6ufkvbfuxl.png" alt="Load test ran successfully" width="701" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we can see from the screenshot that the load test ran successfully with 5 virtual users.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test triggers
&lt;/h2&gt;

&lt;p&gt;Testkube allows us to automate running tests and TestSuites by defining triggers on certain events for various Kubernetes resources. A trigger defines an action that will be executed when a certain event occurs on a specific resource. For example, we could define a trigger that runs a test when a ConfigMap gets modified or a pod gets restarted.&lt;/p&gt;

&lt;p&gt;To try out this feature, we will create a YAML file named &lt;code&gt;testtrigger.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: tests.testkube.io/v1
kind: TestTrigger
metadata:
  name: testtrigger-example
  namespace: testkube
spec:
  resource: configmap
  resourceSelector:
    labelSelector:
      matchLabels:
        app: test
  event: modified
  action: run
  execution: test
  testSelector:
    name: django-app-test
    namespace: testkube


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then we apply the manifest:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

kubectl apply -f testtrigger.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will check if the test trigger is created by running this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

➜ kubectl get testtriggers.tests.testkube.io -A
NAMESPACE   NAME                  RESOURCE    EVENT      EXECUTION   AGE
testkube    testtrigger-example   configmap   modified   test        48m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We will create a ConfigMap with the label app=test, and as soon as we modify the ConfigMap, the test will be triggered. In this way, we can trigger the required tests based on conditions like a deployment rollout, ConfigMap modification, pod restart, etc.&lt;/p&gt;
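&lt;p&gt;A hedged sketch of exercising the trigger defined above (the ConfigMap name and data key are illustrative; the label and namespace match the trigger's selector):&lt;/p&gt;

```shell
# Create a ConfigMap carrying the label the trigger's selector matches.
kubectl create configmap trigger-demo -n testkube
kubectl label configmap trigger-demo -n testkube app=test

# Any modification now counts as a "modified" event and should
# fire the trigger, which runs django-app-test.
kubectl patch configmap trigger-demo -n testkube --type merge -p '{"data":{"ping":"1"}}'
```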

&lt;h2&gt;
  
  
  Integration with CI/CD
&lt;/h2&gt;

&lt;p&gt;Here comes the most interesting part of this post: triggering Testkube tests from a CI/CD pipeline. This practice decouples test execution from the CI/CD tool, removes complexity, and speeds up your CI.&lt;/p&gt;

&lt;p&gt;In this blog post, we will be using GitHub Actions, which lets us run Testkube CLI commands in a GitHub workflow. First, we will set up access to EKS from GitHub Actions. We are using EKS in this case, but any other cloud platform can be used as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-requisites
&lt;/h3&gt;

&lt;p&gt;To use GitHub Actions to perform tasks on the cluster, we need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An EKS cluster&lt;/li&gt;
&lt;li&gt;An IAM user with permissions on the EKS cluster&lt;/li&gt;
&lt;li&gt;The aws-auth ConfigMap on the cluster updated with the username and role ARN, as shown in the screenshot below&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7NHRaQY8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49urtuptl5shasmtt7c9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7NHRaQY8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49urtuptl5shasmtt7c9.png" alt="Update aws-auth configmap on cluster" width="700" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that to keep the post simple, we are mapping &lt;code&gt;testkube&lt;/code&gt; user to &lt;code&gt;system:masters&lt;/code&gt; group. For an actual setup, consider creating a specific ClusterRole/ClusterRoleBinding for a group and mapping that group with &lt;code&gt;testkube&lt;/code&gt; user to grant only minimal access. Take a look at the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html"&gt;EKS docs&lt;/a&gt; for details, or use a &lt;a href="https://developer.ibm.com/tutorials/use-kubernetes-service-accounts-to-enable-automated-kubectl-access/"&gt;Service Account token based kubeconfig&lt;/a&gt; for GitHub Actions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to create a few secrets in the GitHub repository settings under the Secrets section, as shown in the screenshot below. The access key and secret key should belong to the IAM user created in the previous step. We will use these secrets as environment variables in the GitHub Actions configuration file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n0NfCBbo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjl3rw4kr7nbdmwpotux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n0NfCBbo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjl3rw4kr7nbdmwpotux.png" alt="Create secrets in Github" width="712" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;We need to create a &lt;code&gt;.github/workflows/testkube.yaml&lt;/code&gt; file from the Actions tab. It contains two jobs: job_1 establishes connectivity with the EKS cluster, and job_2 creates the test using Testkube.&lt;br&gt;
Here is the content of testkube.yaml:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

name: Testkube Docker Action
on:
 push:
 pull_request:
   branches:
     - master
env:
 AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
 AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
 CLUSTER_NAME: ${{ secrets.CLUSTER_NAME }}
 AWS_REGION: ap-south-1
jobs:
 job_1:
   name: test workflow for AWS
   runs-on: ubuntu-latest
   steps:
   - name: AWS cli install action
     uses: chrislennon/action-aws-cli@1.1
   - name: Configure AWS credentials
     uses: aws-actions/configure-aws-credentials@v1
     with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
       aws-region: ap-south-1
   - name: kubeconfig
     run: |
        aws eks update-kubeconfig --name ${{ env.CLUSTER_NAME }} --region ${{ env.AWS_REGION }}
   - name: Upload kubeconfig file
     uses: actions/upload-artifact@v3
     with:
       name: kubeconfig
       path: /home/runner/.kube/config
 job_2:
   name: create test using testkube
   needs: job_1
   runs-on: ubuntu-latest 
   steps:
   - name: Download kubeconfig from job 1
     uses: actions/download-artifact@v3
     with:
      name: kubeconfig
   - shell: bash
     run: |
       export KUBECONFIG=/home/runner/work/testkube-test/testkube-test/config
   - name: Create test
     id: create_test
     uses: kubeshop/testkube-docker-action@v1.3
     env:
       KUBECONFIG: /home/runner/work/testkube-test/testkube-test/config
       KUBERNETES_MASTER: /home/runner/work/testkube-test/testkube-test/config
     with:
       command: create
       resource: test
       namespace: testkube
       parameters: "--git-uri https://github.com/infracloudio/testkube-django-app.git --git-path myproject/examples/ --type ginkgo/test --name testkube-github-action --git-branch master"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you wish, you can try creating another test or use the same example from this blog post. Once you push the changes, the workflow will be triggered; we can check it in the Actions section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SZ3hku6q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zt5d6k54fb94mnoj6c8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SZ3hku6q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zt5d6k54fb94mnoj6c8w.png" alt="Workflow triggered" width="749" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the job is completed successfully, you will see that the test has been created in the cluster.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

➜ kubectl get tests -A
NAMESPACE   NAME                     AGE
testkube    testkube-github-action   18m


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Testkube is a test execution and orchestration framework for Kubernetes. It has the capability to run any testing tool and integrates with any CI/CD application. It defines tests as Kubernetes CRDs to provide a modern solution for managing all your tests, &lt;a href="https://www.infracloud.io/ci-cd-consulting/"&gt;eliminating CI/CD bottlenecks&lt;/a&gt;, and scaling your tests with your needs.&lt;/p&gt;

&lt;p&gt;In this blog post, we have focused mainly on the Testkube installation, test creation and execution (for a demo application), test triggers, and integration with CI/CD. With basic groundwork covered, you can run your first test using Testkube.&lt;/p&gt;

&lt;p&gt;I hope you found this post informative and engaging. For more posts like this one, do subscribe to our newsletter for a weekly dose of cloud native. I’d love to hear your thoughts on this post, let’s connect and start a conversation on &lt;a href="https://www.linkedin.com/in/richa-mishra-5028061b4/"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Looking to be truly cloud native? Learn why so many startups and enterprises trust InfraCloud as one of the &lt;a href="https://dev.to/cloud-native-consulting/"&gt;best cloud native consulting services providers&lt;/a&gt; for Kubernetes adoption and Day 2 operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.testkube.io/"&gt;Testkube official documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubeshop/testkube"&gt;Testkube GitHub repo&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Postgres data migration from VM server to Kubernetes</title>
      <dc:creator>richamishra006</dc:creator>
      <pubDate>Wed, 30 Nov 2022 06:58:53 +0000</pubDate>
      <link>https://dev.to/richamishra006/postgres-data-migration-from-vm-server-to-kubernetes-1a1g</link>
      <guid>https://dev.to/richamishra006/postgres-data-migration-from-vm-server-to-kubernetes-1a1g</guid>
<description>&lt;p&gt;PostgreSQL is a powerful, open-source object-relational database system that safely stores and scales data workloads. It uses SQL and is widely used for its reliability, data integrity, and robustness, among many other features. While the world is moving towards Kubernetes to run entire application ecosystems, migrating the stateful side of an ecosystem to Kubernetes is still not as straightforward. One has to be very sure of how to migrate databases to Kubernetes without any data loss or downtime.&lt;/p&gt;

&lt;p&gt;Being one of the most widely used and popular relational databases, PostgreSQL can run on various platforms such as macOS, Windows, and Linux. We can set up PostgreSQL on virtual machines, on cloud or physical machines, or use containerized databases.&lt;/p&gt;

&lt;p&gt;In this blog post, we will discuss a scenario where we migrate PostgreSQL data from a virtual machine to a Kubernetes setup. While there are various ways to install Postgres on Kubernetes, the best approach is always to look for an operator that can take away most of the operational and administration burden. One such operator is Kubegres, which we are going to use for its simplicity and features. To know more about Kubegres, check out their official documentation.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is Kubegres? [Postgres operator for k8s]
&lt;/h1&gt;

&lt;p&gt;Kubegres is a Kubernetes operator which helps us to deploy clusters of PostgreSQL instances with data replication and failover enabled out of the box. It brings simplicity when using PostgreSQL with Kubernetes. &lt;/p&gt;

&lt;h2&gt;
  
  
  Features:
&lt;/h2&gt;

&lt;p&gt;It provides data replication, replicating data from the primary PostgreSQL instance to replica instances in real time.&lt;br&gt;
It manages failover by promoting a replica instance to primary in case of failure.&lt;br&gt;
It has an option to schedule backups of the database and dump the data to a separate persistent volume.&lt;/p&gt;

&lt;p&gt;Having learnt about the features of Kubegres, let's proceed with the installation.&lt;/p&gt;
&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;One needs to have a Postgres database installed and running on a virtual machine, as well as access to the Kubernetes cluster to which we'll be migrating the database.&lt;/p&gt;
&lt;h1&gt;
  
  
  Installation of Kubegres
&lt;/h1&gt;

&lt;p&gt;Kubegres is an open source tool. We can install it on Kubernetes by applying the Kubegres operator manifest.&lt;/p&gt;
&lt;h2&gt;
  
  
  Install Kubegres operator
&lt;/h2&gt;

&lt;p&gt;Run the following command in a Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/reactive-tech/kubegres/v1.15/kubegres.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After executing the above command, a kubegres-system namespace will be created, where the controller is installed. We will see a deployment running in the kubegres-system namespace.&lt;/p&gt;

&lt;p&gt;We will see the below output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace/kubegres-system created
customresourcedefinition.apiextensions.k8s.io/kubegres.kubegres.reactive-tech.io created
serviceaccount/kubegres-controller-manager created
role.rbac.authorization.k8s.io/kubegres-leader-election-role created
clusterrole.rbac.authorization.k8s.io/kubegres-manager-role created
clusterrole.rbac.authorization.k8s.io/kubegres-metrics-reader created
clusterrole.rbac.authorization.k8s.io/kubegres-proxy-role created
rolebinding.rbac.authorization.k8s.io/kubegres-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/kubegres-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/kubegres-proxy-rolebinding created
configmap/kubegres-manager-config created
service/kubegres-controller-manager-metrics-service created
deployment.apps/kubegres-controller-manager created

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Secret resource
&lt;/h2&gt;

&lt;p&gt;Before creating a PostgreSQL cluster, we will create a secret holding the passwords of the postgres superuser, the replication user, and the production database user. We will reference these later in the PostgreSQL configuration.&lt;/p&gt;

&lt;p&gt;Create a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi postgresql-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: postgres-production-secret
  namespace: default
type: Opaque
data:
  superUserPassword: S2p4dVhpUmlLVWNBVj0=
  replicationUserPassword: cmVwbGljYXRpb24=
  password: cHJvZHVjdGlvbg==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
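&lt;p&gt;The values under data are base64-encoded. As a quick sketch (using the example passwords from this manifest, where replicationUserPassword decodes to “replication” and password to “production”), you can generate and double-check the encoded values like this:&lt;br&gt;
&lt;/p&gt;

```shell
# Encode secret values with base64; printf avoids encoding a trailing newline.
printf '%s' 'replication' | base64   # replicationUserPassword -> cmVwbGljYXRpb24=
printf '%s' 'production'  | base64   # password -> cHJvZHVjdGlvbg==

# Decode a value to verify what you stored:
printf '%s' 'cHJvZHVjdGlvbg==' | base64 -d   # prints: production
```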



&lt;p&gt;Apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f postgresql-secret.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a ConfigMap
&lt;/h2&gt;

&lt;p&gt;Next we will create a configmap, as Kubegres allows us to override its default configurations and bash scripts based on our requirements. The base configmap contains the default configs for all Kubegres resources in that namespace.&lt;br&gt;
We can override the following configurations by creating our own configmap:&lt;br&gt;
postgres.conf - the official PostgreSQL configs used for both primary and replica servers.&lt;br&gt;
pg_hba.conf - the host-based authentication configs for both primary and replica servers.&lt;br&gt;
primary_init_script.sh - a bash script which runs the first time a primary PostgreSQL container is created. Here we can add instructions to create custom databases.&lt;br&gt;
backup_database.sh - a bash script defining the actions to perform during a backup. It is executed regularly by a dedicated cronjob.&lt;/p&gt;

&lt;p&gt;Create a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi postgres-configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
  namespace: default
data:

  postgres.conf: |2
    # Replication configs
    listen_addresses = '*'
    max_wal_senders = 10
    max_connections = 5000
    shared_buffers = 128MB
    wal_level= logical
    log_min_duration_statement= 5000


    # Logging
    log_destination = 'stderr,csvlog'
    logging_collector = on
    log_directory = 'pg_log'
    log_filename= 'postgresql-%Y-%m-%d_%H%M%S.log'


  pg_hba.conf: |

    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    # Replication connections by a user with the replication privilege
    host    replication     replication     all                     md5
    # As long as it is authenticated, all connections allowed except from "0.0.0.0/0"
    local   all             all                                     md5
    host    all             all             all                     md5
    host    all             all             0.0.0.0/0               reject
    host production production 127.0.0.1/32 trust
    host production production ::1/128 trust
    host production production 0.0.0.0/0 trust
    host production postgres 127.0.0.1/32 trust

  primary_init_script.sh: |
    #!/bin/bash
    set -e

    # This script assumes that the env-var $POSTGRES_MY_DB_PASSWORD contains the password of the custom user to create.
    # You can add any env-var in your Kubegres resource config YAML.

    dt=$(date '+%d/%m/%Y %H:%M:%S');
    echo "$dt - Running init script the 1st time Primary PostgreSql container is created...";

    customDatabaseName="production"
    customUserName="production"

    echo "$dt - Running: psql -v ON_ERROR_STOP=1 --username $POSTGRES_USER --dbname $POSTGRES_DB ...";

    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" &amp;lt;&amp;lt;-EOSQL
    CREATE DATABASE $customDatabaseName;
    CREATE USER $customUserName WITH PASSWORD '$POSTGRES_PRODUCTION_PASSWORD';
    GRANT ALL PRIVILEGES ON DATABASE "$customDatabaseName" to $customUserName;
    EOSQL

    echo "$dt - Init script is completed";

  backup_database.sh: |
    #!/bin/bash
    set -e

    dt=$(date '+%d/%m/%Y %H:%M:%S');
    fileDt=$(date '+%d_%m_%Y_%H_%M_%S');
    backUpFileName="$KUBEGRES_RESOURCE_NAME-backup-$fileDt.gz"
    backUpFilePath="$BACKUP_DESTINATION_FOLDER/$backUpFileName"

    echo "$dt - Starting DB backup of Kubegres resource $KUBEGRES_RESOURCE_NAME into file: $backUpFilePath";
    echo "$dt - Running: pg_dumpall -h $BACKUP_SOURCE_DB_HOST_NAME -U postgres -c | gzip &amp;gt; $backUpFilePath"

    pg_dumpall -h $BACKUP_SOURCE_DB_HOST_NAME -U postgres -c | gzip &amp;gt; $backUpFilePath

    if [ $? -ne 0 ]; then
      rm $backUpFilePath
      echo "Unable to execute a BackUp. Please check DB connection settings"
      exit 1
    fi

    echo "$dt - DB backup completed for Kubegres resource $KUBEGRES_RESOURCE_NAME into file: $backUpFilePath";

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f postgres-configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
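&lt;p&gt;The backup_database.sh script above builds its output path from environment variables that Kubegres injects into the backup cron pod. As a local sketch, its filename logic can be exercised on any machine; the resource name and mount path below are assumptions matching the Kubegres resource and backup volumeMount used later in this post:&lt;br&gt;
&lt;/p&gt;

```shell
# Simulate the backup file naming from backup_database.sh.
# In the real cron pod, Kubegres sets these env vars; the values below
# mirror the Kubegres resource and backup volumeMount defined in this post.
KUBEGRES_RESOURCE_NAME="production-postgresql"
BACKUP_DESTINATION_FOLDER="/var/lib/backup"

fileDt=$(date '+%d_%m_%Y_%H_%M_%S')
backUpFileName="$KUBEGRES_RESOURCE_NAME-backup-$fileDt.gz"
backUpFilePath="$BACKUP_DESTINATION_FOLDER/$backUpFileName"
echo "$backUpFilePath"
```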



&lt;p&gt;In order to create a PostgreSQL cluster, we need to create a Kubegres resource. We will create a file “kubegres.yaml” with the below content. This YAML file will create one primary PostgreSQL pod and two replica PostgreSQL pods. The data will be replicated in real time from the primary pod to the two replica pods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgresql-backup-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard-lrs
  resources:
    requests:
      storage: 35Gi

---
apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
  name: production-postgresql
  namespace: default
  annotations:
    imageregistry: "https://hub.docker.com/"
    nginx.ingress.kubernetes.io/affinity: cookie

spec:
  replicas: 3
  image: postgres:13.2
  port: 5432

  database:
     size: 50G
     storageClassName: standard-lrs
     volumeMount: /var/lib/postgresql/data

  customConfig: postgres-configmap

  failover:
     isDisabled: false

  backup:
     schedule: "0 */2 * * *"
     pvcName: postgresql-backup-pvc
     volumeMount: /var/lib/backup

  env:
     - name: POSTGRES_PRODUCTION_PASSWORD
       valueFrom:
         secretKeyRef:
           name: postgres-production-secret
           key: password
     - name: POSTGRES_PASSWORD
       valueFrom:
          secretKeyRef:
             name: postgres-production-secret
             key: superUserPassword

     - name: POSTGRES_REPLICATION_PASSWORD
       valueFrom:
          secretKeyRef:
             name: postgres-production-secret
             key: replicationUserPassword


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above file, for the persistent volume claim, use a storage class that matches your setup (replace standard-lrs with a storage class available in your cluster). You can also update the Postgres image tag to a newer version.&lt;/p&gt;

&lt;p&gt;Apply the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f kubegres.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we run the above command, three PostgreSQL replicas will spin up. In our file, we have also scheduled a backup of the database every two hours (the cron expression "0 */2 * * *"), which will be stored in a separate persistent volume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
production-postgresql-1-0   1/1     Running   0          2m31s
production-postgresql-2-0   1/1     Running   0          2m20s
production-postgresql-3-0   1/1     Running   0          2m6s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we have our Kubegres pods up and running, we will create a sample database with a few tables on the PostgreSQL VM and migrate it to the Kubegres setup in Kubernetes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Sample Database and data on Postgres(VM) server
&lt;/h1&gt;

&lt;p&gt;Assuming a PostgreSQL server is already running on the virtual machine, we will create a database “production”, insert some data into it, and verify the same data after the migration is done.&lt;/p&gt;

&lt;p&gt;First, we will check if the Postgres service is up and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres@richa-mishra:~$ systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
     Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2022-07-25 08:51:59 IST; 9h ago
    Process: 1233 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
   Main PID: 1233 (code=exited, status=0/SUCCESS)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will create a database called “production”. Switch to user &lt;code&gt;postgres&lt;/code&gt; and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;psql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will log in to the database as the &lt;code&gt;postgres&lt;/code&gt; user and create a database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# CREATE DATABASE production;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's connect to this production database and insert some data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;postgres=# \c production
You are now connected to database "production" as user "postgres".
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a table and add some data to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;production=#CREATE TABLE employee (
    employee_id INT PRIMARY KEY,
    first_name VARCHAR (255) NOT NULL,
    last_name VARCHAR (255) NOT NULL,
    manager_id INT,
    FOREIGN KEY (manager_id) 
    REFERENCES employee (employee_id) 
    ON DELETE CASCADE
);

production=# INSERT INTO employee (
    employee_id,
    first_name,
    last_name,
    manager_id
)
VALUES
    (1, 'Sandeep', 'Jain', NULL),
    (2, 'Abhishek ', 'Kelenia', 1),
    (3, 'Harsh', 'Aggarwal', 1),
    (4, 'Raju', 'Kumar', 2),
    (5, 'Nikhil', 'Aggarwal', 2),
    (6, 'Anshul', 'Aggarwal', 2),
    (7, 'Virat', 'Kohli', 3),
    (8, 'Rohit', 'Sharma', 3);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;production=# SELECT * FROM employee;
 employee_id | first_name | last_name | manager_id 
-------------+------------+-----------+------------
           1 | Sandeep    | Jain      |           
           2 | Abhishek   | Kelenia   |          1
           3 | Harsh      | Aggarwal  |          1
           4 | Raju       | Kumar     |          2
           5 | Nikhil     | Aggarwal  |          2
           6 | Anshul     | Aggarwal  |          2
           7 | Virat      | Kohli     |          3
           8 | Rohit      | Sharma    |          3
(8 rows)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have a database called production, with a table and some data in it. After migration, we will verify that the same data exists on Kubegres. To perform the data migration, we will first establish network connectivity between PostgreSQL on the VM and Kubegres on Kubernetes. Once connectivity is established, we will run a job that dumps the data from the VM and restores it to the database on Kubernetes. Let's perform this activity step by step as explained below.&lt;/p&gt;

&lt;h1&gt;
  
  
  Migration of data from Postgres Server to Kubegres:
&lt;/h1&gt;

&lt;p&gt;We will edit the pg_hba.conf file of PostgreSQL on the VM and add an entry to allow connections from Kubegres; essentially, we are setting up authentication between Kubegres and the Postgres instance on the VM.&lt;br&gt;
So switch to the &lt;code&gt;postgres&lt;/code&gt; user and run the below command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/postgresql/13/main/pg_hba.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case, PostgreSQL on the VM is version 13, hence the folder name 13.&lt;/p&gt;

&lt;p&gt;We will add the following line with the IP address of the Kubegres pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;host    production     all    172.18.0.2/32         trust

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After making changes to pg_hba.conf, make sure to restart the postgresql service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart postgresql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we will create a Service and an Endpoints object to establish connectivity between the Postgres setup on the VM and Kubernetes.&lt;br&gt;
A Service defined without a selector, together with a corresponding Endpoints object, lets us reach backends outside the cluster, such as Postgres on the VM. So we will create a Service and an Endpoints object manually.&lt;br&gt;
When you create an Endpoints object for a Service, you set its name to be the same as that of the Service.&lt;br&gt;
In the Endpoints object, under addresses, we will specify the IP address of the Postgres virtual machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi external-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the below contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: pg
spec:
  clusterIP: None
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
  selector: null
---
apiVersion: v1
kind: Endpoints
metadata:
  name: pg
subsets:
- addresses:
  - ip: 192.168.1.41
  ports:
  - name: postgres
    port: 5432
    protocol: TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f external-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, we will create a configmap containing all the details of our setup:&lt;br&gt;
DB: the database name to be migrated. We already created a database with the same name, “production”, on the Kubernetes setup at installation time.&lt;br&gt;
USER: the username/role having access to the DB.&lt;br&gt;
PGPASSWORD: the password of that user, in plain text.&lt;br&gt;
SRC: the host name of Postgres on the VM; here we use the name and namespace of the external Service created in the step above.&lt;br&gt;
DST: the host name of the Kubegres setup.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi migration-cm.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-migration-cm
  labels:
    app: production-postgresql
data:
    DB: production
    USER: postgres
    PGPASSWORD: KjxuXiRiKUcAV=
    #     svcname.namespace
    SRC: "pg.default"
    #     podname.servicename[.namespace]
    DST: "production-postgresql-1-0.production-postgresql"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f migration-cm.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
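&lt;p&gt;Two values in this configmap are worth sanity-checking before running the job. PGPASSWORD must be the plain-text superuser password, i.e. the decoded form of superUserPassword from our earlier secret, and SRC/DST follow Kubernetes DNS naming (service.namespace for the selector-less Service, and pod-name.headless-service-name for the primary Kubegres pod). A quick local check, using the example values from this post:&lt;br&gt;
&lt;/p&gt;

```shell
# PGPASSWORD should equal the decoded superUserPassword from postgres-production-secret.
printf '%s' 'S2p4dVhpUmlLVWNBVj0=' | base64 -d; echo
# prints: KjxuXiRiKUcAV=

# SRC is service.namespace; DST is pod-name.headless-service-name.
SRC="pg.default"
DST="production-postgresql-1-0.production-postgresql"
echo "$SRC  $DST"
```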



&lt;p&gt;Since the configmap we created above holds all the details of the source and destination databases, we can now run the migration job.&lt;br&gt;
Create a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim job.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the below contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: populate-db
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pg
        image: crunchydata/crunchy-postgres:centos8-13.5-4.7.4
        command: ["/bin/sh"]
        args:
        - -c
        - "pg_dump -h $SRC -U $USER --format c $DB | pg_restore --verbose -h $DST -U $USER --format c --dbname $DB"
        envFrom:
        - configMapRef:
            name: postgres-migration-cm

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f job.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once we run the above command, a job will be created and a pod will spin up. We will see the output as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods | grep populate
NAME                        READY   STATUS      RESTARTS      AGE
populate-db-txgkt           0/1     Completed   0             2s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will check the logs of this container. If there is any issue with connectivity, or any other error, it will appear in the logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl logs populate-db-txgkt
pg_restore: connecting to database for restore
pg_restore: creating TABLE "public.employee"
pg_restore: processing data for table "public.employee"
pg_restore: creating CONSTRAINT "public.employee employee_pkey"
pg_restore: creating FK CONSTRAINT "public.employee employee_manager_id_fkey"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we can see in the logs that the database connection and restore were successful.&lt;/p&gt;

&lt;p&gt;To verify that the same table and data have been created in the Kubegres pod, we will exec into the pod and check the data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it production-postgresql-1-0 bash
root@production-postgresql-1-0:/# su postgres
postgres@production-postgresql-1-0:/$ cd
postgres@production-postgresql-1-0:~$ psql -d production
Password for user postgres: 
psql (14.1 (Debian 14.1-1.pgdg110+1))
Type "help" for help.
production=# \dt
          List of relations
 Schema |   Name   | Type  |  Owner   
--------+----------+-------+----------
 public | employee | table | postgres
(1 row)

production=#  SELECT * FROM employee;
 employee_id | first_name | last_name | manager_id 
-------------+------------+-----------+------------
           1 | Sandeep    | Jain      |           
           2 | Abhishek   | Kelenia   |          1
           3 | Harsh      | Aggarwal  |          1
           4 | Raju       | Kumar     |          2
           5 | Nikhil     | Aggarwal  |          2
           6 | Anshul     | Aggarwal  |          2
           7 | Virat      | Kohli     |          3
           8 | Rohit      | Sharma    |          3
(8 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will execute the above commands step by step.&lt;br&gt;
First we will exec into the pod by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it production-postgresql-1-0 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we will switch to the postgres user and run &lt;code&gt;psql -d production&lt;/code&gt; to connect to the production database. Once connected with the right credentials, we will run the same query we used in the sample database section to verify the consistency of the data:&lt;br&gt;
&lt;code&gt;SELECT * FROM employee;&lt;/code&gt;&lt;br&gt;
This query lists the same data that we inserted on the Postgres VM.&lt;/p&gt;

&lt;p&gt;So the data has been transferred successfully. Now, based on your setup, you can switch the application's connectivity from the Postgres VM to the Kubegres setup. This typically requires a merge request with a few configuration changes; once the MR is merged, the application will start connecting to the Kubegres setup, without any downtime and without any data loss.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;These were the steps to transfer a database from a virtual machine to a Kubernetes setup. Using them, we migrated a production database on a VM to Kubernetes with zero downtime and no data loss.&lt;br&gt;
This blog post can be helpful even if you use a database other than PostgreSQL and are looking for a migration path from a VM to Kubernetes.&lt;br&gt;
To understand more about Kubegres, please refer to their official docs and GitHub page.&lt;/p&gt;

&lt;p&gt;That’s all for this post. If you are working on a similar scenario and need some assistance, feel free to reach out to me via LinkedIn, you can also read other blog posts that I’ve written on &lt;a href="https://www.infracloud.io/blogs/" rel="noopener noreferrer"&gt;InfraCloud&lt;/a&gt;. I’m always excited to hear thoughts and feedback from my readers!&lt;/p&gt;

</description>
      <category>python</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
