<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: pistocop</title>
    <description>The latest articles on DEV Community by pistocop (@pistocop).</description>
    <link>https://dev.to/pistocop</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F394807%2F95379b11-d70b-4918-8de2-a6459fc97dd7.jpeg</url>
      <title>DEV Community: pistocop</title>
      <link>https://dev.to/pistocop</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pistocop"/>
    <language>en</language>
    <item>
      <title>🛠️ A gentle introduction to GKE private cluster deployment</title>
      <dc:creator>pistocop</dc:creator>
      <pubDate>Thu, 02 Mar 2023 07:33:25 +0000</pubDate>
      <link>https://dev.to/pistocop/a-gentle-introduction-to-gke-private-cluster-deployment-1jgb</link>
      <guid>https://dev.to/pistocop/a-gentle-introduction-to-gke-private-cluster-deployment-1jgb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;🧭  Study how to deploy GKE private cluster using terraform and expose an echo server&lt;/p&gt;

&lt;p&gt;🔗 Repo: &lt;a href="https://github.com/pistocop/gke-basic-cluster-deployment"&gt;https://github.com/pistocop/gke-basic-cluster-deployment&lt;/a&gt;&lt;br&gt;
⚠️ For production deployment use &lt;a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine"&gt;Terraform Kubernetes Engine Module&lt;/a&gt;&lt;br&gt;
📧 Found an error or have a question? &lt;a href="https://www.pistocop.dev/"&gt;write to me&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  📢 Intro
&lt;/h2&gt;

&lt;p&gt;Kubernetes (&lt;a href="https://kubernetes.io/"&gt;k8s&lt;/a&gt;), although it needs no introduction, is the most famous and widely adopted container orchestrator in the world. Hosting it yourself is undoubtedly an advanced, expert-level undertaking, so most companies choose a provider that offers it as a managed service.&lt;/p&gt;

&lt;p&gt;There are several famous managed k8s services (e.g. &lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE&lt;/a&gt;, &lt;a href="https://aws.amazon.com/eks/"&gt;EKS&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-us/products/kubernetes-service"&gt;AKS&lt;/a&gt;, &lt;a href="https://www.pistocop.dev/posts/deploy_es_on_okteto/"&gt;Okteto&lt;/a&gt;), but there is no doubt that one of the leaders is Google Kubernetes Engine (&lt;a href="https://cloud.google.com/kubernetes-engine"&gt;GKE&lt;/a&gt;), a product provided by Google Cloud Platform (&lt;a href="https://cloud.google.com/"&gt;GCP&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;So can we simply open GKE, start a cluster, and be ready to go? Well… yes and no: it may work until something more structured and production-ready is required. At that point the GKE settings and tweaks start to emerge and need to be addressed. Take for example the official &lt;a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine"&gt;terraform-google-kubernetes-engine&lt;/a&gt; terraform module: there are a lot of parameters that can really change how GKE will be deployed and used.&lt;/p&gt;

&lt;p&gt;So, simple things first: we will not go through all the &lt;a href="https://github.com/terraform-google-modules/terraform-google-kubernetes-engine#inputs"&gt;possible parameters&lt;/a&gt;; instead, we will walk through a basic GKE deployment with a description of the main settings that such a deployment should address.&lt;br&gt;
For better understanding - and code reuse - we deploy the system using &lt;a href="https://www.terraform.io/"&gt;&lt;code&gt;terraform&lt;/code&gt;&lt;/a&gt;, which lets us explain all the components with the unique clarity that belongs to code.&lt;/p&gt;
&lt;h2&gt;
  
  
  🚀 Deploy
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.terraform.io/"&gt;&lt;code&gt;terraform&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/tasks/tools/"&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/sdk/gcloud"&gt;&lt;code&gt;gcloud&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://tfswitch.warrensbox.com/"&gt;&lt;code&gt;tfswitch&lt;/code&gt;&lt;/a&gt; - optional&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Download the repo &lt;a href="https://github.com/pistocop/gke-basic-cluster-deployment"&gt;https://github.com/pistocop/gke-basic-cluster-deployment&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Enable the following GCP APIs using the GCP console [1]

&lt;ul&gt;
&lt;li&gt;Open GCP console -&amp;gt; APIs &amp;amp; Services -&amp;gt; enable APIs and services&lt;/li&gt;
&lt;li&gt;Enable:

&lt;ul&gt;
&lt;li&gt;Compute Engine API&lt;/li&gt;
&lt;li&gt;Kubernetes Engine API&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fill in the input variables:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;iac/variables.tfvars.example iac/variables.tfvars
    &lt;span class="nv"&gt;$ &lt;/span&gt;vi iac/variables.tfvars &lt;span class="c"&gt;# replace with your data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Deploy the cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace/set the variables with your data
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;# configure gcloud to the desired project&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;gcloud config &lt;span class="nb"&gt;set &lt;/span&gt;project &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;

    &lt;span class="c"&gt;# configure terraform&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;iac
    &lt;span class="nv"&gt;$ &lt;/span&gt;tfswitch
    &lt;span class="nv"&gt;$ &lt;/span&gt;terraform init

    &lt;span class="c"&gt;# deploy the GKE pre-requisites&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt; out.plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"./variables.tfvars"&lt;/span&gt; &lt;span class="nt"&gt;-var&lt;/span&gt;  &lt;span class="nv"&gt;deploy_cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply out.plan

    &lt;span class="c"&gt;# deploy GKE - can take more than 20 minutes&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt; out.plan &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"./variables.tfvars"&lt;/span&gt; &lt;span class="nt"&gt;-var&lt;/span&gt; &lt;span class="nv"&gt;deploy_cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply out.plan

&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Deploy the services into k8s&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace/set the variables with your data
&lt;/li&gt;
&lt;/ul&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;# set kubectl context&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;gcloud container clusters get-credentials gkedeploy-cluster &lt;span class="nt"&gt;--zone&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_REGION&lt;/span&gt; &lt;span class="nt"&gt;--project&lt;/span&gt; &lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;

    &lt;span class="c"&gt;# create common resources&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s/common

    &lt;span class="c"&gt;# deploy the server&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; k8s/gechoserver/

    &lt;span class="c"&gt;# wait that the ADDRESS will be displayed - can take more than 10 minutes&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; dev get ingress &lt;span class="nt"&gt;-o&lt;/span&gt; wide
    NAME          CLASS    HOSTS   ADDRESS          PORTS   AGE
    gechoserver   &amp;lt;none&amp;gt;   &lt;span class="k"&gt;*&lt;/span&gt;       34.120.114.207   80      67s

    &lt;span class="c"&gt;# query the server from internet - can take  more than 10 minutes&lt;/span&gt;
    &lt;span class="c"&gt;# replace "34.120.114.207" with your address:&lt;/span&gt;
    &lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-XPOST&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &amp;lt;http://34.120.114.207/&amp;gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"foo=bar"&lt;/span&gt;

    &lt;span class="c"&gt;# ~ Congratulation, your server on GKE is up and running! ~&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Destroy the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    $ cd iac
    $ terraform destroy -auto-approve -var-file="./variables.tfvars" -var deploy_cluster=true

&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;
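&lt;p&gt;The echo server reached above is exposed through a GKE Ingress whose back-end Service leverages container-native load balancing; a minimal sketch of such a Service manifest (names, labels, and ports are illustrative, not the exact repo contents):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: gechoserver
  namespace: dev
  annotations:
    # Ask GKE to create Network Endpoint Groups so the Ingress
    # can route traffic directly to the pods (container-native LB)
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: ClusterIP          # no NodePort needed with NEGs
  selector:
    app: gechoserver
  ports:
    - port: 80
      targetPort: 8080     # illustrative container port
&lt;/code&gt;&lt;/pre&gt;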

&lt;h2&gt;
  
  
  🏗️ Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;The k8s cluster provider is &lt;a href="https://cloud.google.com/kubernetes-engine?hl=it"&gt;GKE&lt;/a&gt; from Google Cloud Platform (GCP)&lt;/li&gt;
&lt;li&gt;The terraform state is stored only locally (e.g. no backend on &lt;a href="https://cloud.google.com/storage"&gt;GCS&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;GKE&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Deployed as a &lt;a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters"&gt;private cluster&lt;/a&gt;, so its nodes use only internal IPs&lt;/li&gt;
&lt;li&gt;Deployed in VPC-native mode:

&lt;ul&gt;
&lt;li&gt;Traffic to a specific pod is routed directly to it thanks to the &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#container-native_load_balancing"&gt;container native LB&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;We deploy the Ingress back-end as &lt;code&gt;ClusterIP&lt;/code&gt; instead of &lt;code&gt;NodePort&lt;/code&gt; to leverage the &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#container-native_load_balancing"&gt;container native load balancing&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The terraform variable &lt;code&gt;deploy_cluster&lt;/code&gt; steers the cluster creation; set it to &lt;code&gt;false&lt;/code&gt; to create only the network components&lt;/li&gt;
&lt;li&gt;Uses a Service Account named &lt;code&gt;gkedeploy-sa&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
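&lt;p&gt;The &lt;code&gt;deploy_cluster&lt;/code&gt; toggle can be sketched with terraform's &lt;code&gt;count&lt;/code&gt; meta-argument (a minimal sketch; the resource body is illustrative and omits most settings):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "deploy_cluster" {
  type    = bool
  default = true
}

resource "google_container_cluster" "gke" {
  # Created only when deploy_cluster is true; with false,
  # terraform deploys just the network/SA prerequisites
  count    = var.deploy_cluster ? 1 : 0
  name     = "gkedeploy-cluster"
  location = var.region
  # ... private-cluster and VPC-native settings ...
}
&lt;/code&gt;&lt;/pre&gt;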

&lt;h3&gt;
  
  
  &lt;strong&gt;Network&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No &lt;a href="https://cloud.google.com/nat/docs/overview"&gt;NAT&lt;/a&gt; will be deployed

&lt;ul&gt;
&lt;li&gt;Therefore the system cannot pull images from public container registries like Docker Hub; read more under Tips and Takeaways
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The VPC is created in custom subnet mode, so it doesn't automatically create a subnet for each region&lt;/li&gt;
&lt;li&gt;Two subnetworks are provided:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;gkedeploy-subnet&lt;/code&gt;: with the range &lt;code&gt;10.10.0.0/24&lt;/code&gt;, it is the subnetwork where the GKE nodes are deployed

&lt;ul&gt;
&lt;li&gt;Instances within this network can access Google APIs and services by using &lt;a href="https://cloud.google.com/vpc/docs/private-google-access"&gt;Private Google Access&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;gkedeploy-lb-proxy-only-subnet&lt;/code&gt;: with the range &lt;code&gt;10.14.0.0/23&lt;/code&gt;, it is a &lt;a href="https://cloud.google.com/load-balancing/docs/proxy-only-subnets"&gt;proxy-only subnet&lt;/a&gt;, required by GCP to reserve a range of IPs used to deploy the Load Balancers&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The VPC-native cluster alias IP ranges can be checked under "Console -&amp;gt; VPC network details -&amp;gt; secondary IPv4 ranges"

&lt;ul&gt;
&lt;li&gt;Under that field we find both the &lt;code&gt;cluster_ipv4_cidr_block&lt;/code&gt; (for pods - &lt;code&gt;10.11.0.0/21&lt;/code&gt;) and &lt;code&gt;services_ipv4_cidr_block&lt;/code&gt; (for services - &lt;code&gt;10.12.0.0/21&lt;/code&gt;) values&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;The Google-hosted GKE master nodes will use the &lt;code&gt;10.13.0.0/28&lt;/code&gt; range; see the &lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#master_ipv4_cidr_block"&gt;master_ipv4_cidr_block&lt;/a&gt; parameter&lt;/li&gt;
&lt;/ul&gt;
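&lt;p&gt;The network layout above can be sketched in terraform as follows (illustrative resource names; the CIDRs are the ones listed above):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_subnetwork" "gke" {
  name                     = "gkedeploy-subnet"
  ip_cidr_range            = "10.10.0.0/24"
  region                   = var.region
  network                  = google_compute_network.vpc.id
  private_ip_google_access = true  # Private Google Access for the nodes

  secondary_ip_range {
    range_name    = "pods"            # cluster_ipv4_cidr_block
    ip_cidr_range = "10.11.0.0/21"
  }
  secondary_ip_range {
    range_name    = "services"        # services_ipv4_cidr_block
    ip_cidr_range = "10.12.0.0/21"
  }
}

resource "google_compute_subnetwork" "lb_proxy_only" {
  name          = "gkedeploy-lb-proxy-only-subnet"
  ip_cidr_range = "10.14.0.0/23"
  region        = var.region
  network       = google_compute_network.vpc.id
  purpose       = "REGIONAL_MANAGED_PROXY"  # proxy-only subnet
  role          = "ACTIVE"
}
&lt;/code&gt;&lt;/pre&gt;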

&lt;h2&gt;
  
  
  🥪 Tips and Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Every component and setting described here is reported in the &lt;code&gt;terraform&lt;/code&gt; code, along with further insights; read the code to grasp those concepts&lt;/li&gt;
&lt;li&gt;We deploy GKE after the other resources (this is why we ran two &lt;code&gt;terraform&lt;/code&gt; plans) because otherwise the GKE deployment sometimes remains stuck indefinitely during the health-check process, and terraform returns the error &lt;code&gt;Error: Error waiting for creating GKE cluster: All cluster resources were brought up, but [...]&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;The error may be due to the SA not yet being deployed/up and running, so first deploy all the resources using &lt;code&gt;-var deploy_cluster=false&lt;/code&gt; and then deploy the cluster using terraform with &lt;code&gt;-var deploy_cluster=true&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;To pull Docker images without adding the NAT component we have two choices (memo: we have &lt;a href="https://cloud.google.com/vpc/docs/configure-private-google-access"&gt;Private Google Access&lt;/a&gt; enabled):

&lt;ul&gt;
&lt;li&gt;Enable the &lt;a href="https://cloud.google.com/artifact-registry"&gt;Artifact Registry&lt;/a&gt; service on your GCP project and upload/mirror the desired images&lt;/li&gt;
&lt;li&gt;Choose the images to use from the &lt;a href="https://console.cloud.google.com/gcr/images/google-containers/GLOBAL"&gt;public Google Container Registry&lt;/a&gt;, which is reachable from a Google-allowed IP (e.g. the &lt;code&gt;k8s/gechoserver&lt;/code&gt; deployment)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;By default, you cannot reach a GCE VM using &lt;code&gt;--tunnel-through-iap&lt;/code&gt; because the firewall blocks that connection

&lt;ul&gt;
&lt;li&gt;We add the &lt;code&gt;fw-iap&lt;/code&gt; firewall rule to terraform in order to use this GCP functionality, named &lt;a href="https://cloud.google.com/iap/docs/using-tcp-forwarding"&gt;IAP for TCP forwarding&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;[1] We could write Terraform code to enable the GCP APIs, but the common opinion is that we should not&lt;/li&gt;
&lt;li&gt;On terraform, under the GKE section, why is &lt;code&gt;master_ipv4_cidr_block&lt;/code&gt; required?

&lt;ul&gt;
&lt;li&gt;Because the k8s master(s) are managed by Google, and a peering connection is created between the Google network and the GKE network&lt;/li&gt;
&lt;li&gt;Due to this connection, Google needs to know a free IP range to use when assigning IPs to the master's components&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;When deploying a k8s Service object, pay attention when defining UDP/TCP ports: wrong usages fail silently&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start 2 pods (&lt;code&gt;A&lt;/code&gt; and &lt;code&gt;B&lt;/code&gt;) and declare a ClusterIP on port &lt;code&gt;80&lt;/code&gt; for TCP connections only&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the following code:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ~ With TCP:
# Client A:
$ nc -l -p 8080

# Client B:
$ nc network-multitool 8080
hello in TCP

# Client A:
hello in TCP # &amp;lt;-- msg received

# ~ With UDP:
# Client A:
$ nc -l -u -p 8081

# Client B:
$ nc -u network-multitool 8081
hello in UDP

# Client A:
&amp;lt;nothing&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
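&lt;p&gt;The &lt;code&gt;fw-iap&lt;/code&gt; rule mentioned above can be sketched like this (illustrative; &lt;code&gt;35.235.240.0/20&lt;/code&gt; is the source range Google documents for IAP TCP forwarding):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "google_compute_firewall" "fw_iap" {
  name      = "fw-iap"
  network   = google_compute_network.vpc.id
  direction = "INGRESS"

  # Allow IAP's TCP-forwarding source range to reach SSH
  source_ranges = ["35.235.240.0/20"]
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}
&lt;/code&gt;&lt;/pre&gt;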

&lt;h2&gt;
  
  
  🔗 Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[1] Can I automatically enable APIs when using GCP cloud with terraform? - &lt;a href="https://stackoverflow.com/a/72306829"&gt;so&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;[2] Best managed kubernetes platform - &lt;a href="https://www.reddit.com/r/kubernetes/comments/yag39i/best_managed_kubernetes_platform/"&gt;reddit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Learn Terraform - Provision a GKE Cluster - &lt;a href="https://github.com/hashicorp/learn-terraform-provision-gke-cluster"&gt;gh&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Official GCP Terraform provider - &lt;a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs"&gt;doc&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GKE Ingress for HTTP(S) Load Balancing - &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#container-native_load_balancing"&gt;doc&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Network overview - &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#outside-cluster"&gt;doc&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;VPC-native clusters - &lt;a href="https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing"&gt;doc&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;DNS on GKE: Everything you need to know - &lt;a href="https://medium.com/google-cloud/dns-on-gke-everything-you-need-to-know-b961303f9153"&gt;medium&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A trip with Google Global Load Balancers - &lt;a href="https://medium.com/google-developer-experts/a-trip-with-google-global-load-balancers-advanced-but-easy-f09b255d5a23"&gt;medium&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>kubernetes</category>
      <category>tutorial</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Deploy Elasticsearch 8.5 on Kubernetes with Okteto Cloud free plan</title>
      <dc:creator>pistocop</dc:creator>
      <pubDate>Fri, 18 Nov 2022 19:21:11 +0000</pubDate>
      <link>https://dev.to/pistocop/deploy-elasticsearch-85-on-kubernetes-with-okteto-cloud-free-plan-493g</link>
      <guid>https://dev.to/pistocop/deploy-elasticsearch-85-on-kubernetes-with-okteto-cloud-free-plan-493g</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Guide to deploy Elasticsearch 8.5 cluster on Okteto Cloud, for free and with basic security settings&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  📢 Intro
&lt;/h2&gt;

&lt;p&gt;Elasticsearch (ES) is probably the most common and famous search engine, and it introduced a &lt;a href="https://www.elastic.co/blog/whats-new-elastic-8-0-0"&gt;lot of new features&lt;/a&gt; with the 8.0 release, like the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/dense-vector.html"&gt;dense vector&lt;/a&gt; field type and the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/knn-search.html"&gt;kNN search&lt;/a&gt;, which combined allow Elasticsearch to be used for &lt;a href="https://www.elastic.co/blog/how-to-deploy-nlp-text-embeddings-and-vector-search"&gt;vector search&lt;/a&gt; in many machine-learning applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.okteto.com/"&gt;Okteto&lt;/a&gt; is an application that &lt;a href="https://github.com/okteto/okteto"&gt;allows you to develop inside a container&lt;/a&gt;, along with many features it permit the user to start a &lt;a href="https://www.okteto.com/docs/welcome/overview/"&gt;development environment&lt;/a&gt; and provide an &lt;a href="https://www.okteto.com/docs/cloud/ssl/"&gt;automatic SSL Endpoints&lt;/a&gt; for k8s.&lt;/p&gt;

&lt;p&gt;Unfortunately, the new security system introduced by ES 8.0 has caused &lt;a href="https://github.com/elastic/helm-charts/issues/1594"&gt;problems with the official helm chart&lt;/a&gt;, so we cannot use the &lt;a href="https://www.okteto.com/docs/cloud/deploy-from-helm/"&gt;standard Okteto Chart deploy system&lt;/a&gt;. &lt;br&gt;
&lt;strong&gt;In this article we will see how to deploy ES 8.x into Kubernetes (k8s) using Okteto Cloud as the platform&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  ✨ Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Elasticsearch &lt;em&gt;8.5.0&lt;/em&gt; version&lt;/li&gt;
&lt;li&gt;Cluster composed of 3 nodes&lt;/li&gt;
&lt;li&gt;Deployable under the &lt;a href="https://www.okteto.com/pricing/"&gt;Okteto Cloud free tier&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Protected by Elasticsearch &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html#bootstrap-elastic-passwords"&gt;password&lt;/a&gt;, internode &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/master/configuring-tls.html"&gt;TLS&lt;/a&gt; and &lt;a href="https://www.okteto.com/docs/cloud/ssl"&gt;HTTPS connection&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Okteto &lt;a href="https://www.okteto.com/development-environments/"&gt;development environment&lt;/a&gt; based on the &lt;a href="https://hub.docker.com/r/yauritux/busybox-curl"&gt;&lt;code&gt;busybox-curl&lt;/code&gt;&lt;/a&gt; image&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  🚀 Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create an &lt;a href="https://www.okteto.com/try-free/"&gt;Okteto account&lt;/a&gt;, install and configure the &lt;a href="https://www.okteto.com/docs/getting-started/"&gt;Okteto CLI&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Clone the &lt;a href="https://github.com/pistocop/okteto-elasticsearch"&gt;okteto-elasticsearch&lt;/a&gt; repo&lt;/li&gt;
&lt;li&gt;Generate the ES certificates:

&lt;ul&gt;
&lt;li&gt;Start Docker and run &lt;code&gt;$ bash scripts/certgen-launcher.sh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Deploy on Okteto

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;$ okteto deploy --build&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Check the created endpoint from the previous output&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Call the ES endpoint:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note: if not configured [1], &lt;code&gt;&amp;lt;your-password&amp;gt;&lt;/code&gt; value is &lt;code&gt;changeme&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -XGET -u elastic:&amp;lt;your-password&amp;gt; https://&amp;lt;your-endpoint-created&amp;gt;.cloud.okteto.net/_cat/nodes\\?v

# Example:
$ curl -XGET -u elastic:changeme &amp;lt;https://es01-http-mynamespace.cloud.okteto.net/_cat/nodes\\?v&amp;gt;
ip          heap.percent ram.percent cpu load_1m load_5m load_15m node.role   master name
10.8.38.167            7          62  32    1.69    1.41     0.93 cdfhilmrstw *      es02
10.8.38.166           10          60  27    1.69    1.41     0.93 cdfhilmrstw -      es01
10.8.38.168           11          62  36    1.69    1.41     0.93 cdfhilmrstw -      es03

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Enjoy your cluster!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do you want to use &lt;a href="https://www.elastic.co/kibana/"&gt;Kibana&lt;/a&gt;? see [2]&lt;/li&gt;
&lt;li&gt;Don't waste free resources, if you don't need the cluster tear down everything with &lt;code&gt;$ okteto destroy -v&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  ✍️ Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Security is provided by:

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/master/secure-cluster.html"&gt;TLS internode&lt;/a&gt; communication with user-generated certificates&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.okteto.com/docs/cloud/ssl"&gt;HTTPS endpoint&lt;/a&gt; with Okteto managed certificates&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes

&lt;ul&gt;
&lt;li&gt;Instead of directly declaring the GKE ingress, we will use the Okteto-provided auto SSL

&lt;ul&gt;
&lt;li&gt;Through the &lt;code&gt;dev.okteto.com/auto-ingress: "true"&lt;/code&gt; annotation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;We will create one &lt;code&gt;ClusterIP&lt;/code&gt; for each node on port &lt;code&gt;9300&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;Because ES uses that as the default port for internode communication&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
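&lt;p&gt;The two kinds of Service described above can be sketched as follows (illustrative manifests following the repo's &lt;code&gt;es01&lt;/code&gt;/&lt;code&gt;es-http&lt;/code&gt; naming; check &lt;code&gt;k8s/elasticsearch.yml&lt;/code&gt; for the real ones):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One ClusterIP per node, for internode transport
apiVersion: v1
kind: Service
metadata:
  name: es01
spec:
  type: ClusterIP
  selector:
    app: es01
  ports:
    - name: transport
      port: 9300          # ES internode transport port
---
# HTTP Service exposed by Okteto with automatic SSL
apiVersion: v1
kind: Service
metadata:
  name: es-http
  annotations:
    dev.okteto.com/auto-ingress: "true"
spec:
  type: ClusterIP
  selector:
    app: es01
  ports:
    - name: http
      port: 9200          # ES REST API port
&lt;/code&gt;&lt;/pre&gt;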
&lt;h3&gt;
  
  
  🔧 How to
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;[1] Change the default Elasticsearch password:

&lt;ul&gt;
&lt;li&gt;Generate the base64 new password

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;$ echo "NEW_PASSWORD" | tr -d '\n' | base64 -w 0&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Open the &lt;code&gt;k8s/elasticsearch.yml&lt;/code&gt; file

&lt;ul&gt;
&lt;li&gt;Use the generated value to replace the &lt;code&gt;ELASTIC_PASSWORD&lt;/code&gt; value of the &lt;code&gt;Secret&lt;/code&gt; component&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;[2] Run Kibana locally

&lt;ul&gt;
&lt;li&gt;🚧 Currently &lt;a href="https://github.com/pistocop/elastic-certified-engineer/tree/develop/dockerfiles/20_cluster8x-extenalkibana"&gt;WIP&lt;/a&gt;, waiting for &lt;a href="https://github.com/elastic/elasticsearch/issues/89017"&gt;this ES issue&lt;/a&gt; to be resolved&lt;/li&gt;
&lt;li&gt;Run Kibana locally and connect it to the Okteto cluster:

&lt;ul&gt;
&lt;li&gt;We run the Docker container locally so as not to waste the Okteto Cloud resources&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
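&lt;p&gt;For reference, the &lt;code&gt;Secret&lt;/code&gt; holding the password looks roughly like this (the Secret name is illustrative; &lt;code&gt;Y2hhbmdlbWU=&lt;/code&gt; is the base64 of the default &lt;code&gt;changeme&lt;/code&gt;):&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: es-credentials          # illustrative name
type: Opaque
data:
  # Produced with: echo "changeme" | tr -d '\n' | base64 -w 0
  ELASTIC_PASSWORD: Y2hhbmdlbWU=
&lt;/code&gt;&lt;/pre&gt;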
&lt;h2&gt;
  
  
  ⚒️ Okteto
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Development environment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;We can test the internode network thanks to the &lt;a href="https://www.okteto.com/docs/reference/development-environment"&gt;Okteto development environment&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start the busybox-curl pod
$ okteto up

# The pod is mounted with all the local files, including the certificates:
&amp;gt; ls -l /okteto/
Dockerfile  README.md   certs       k8s         okteto.yml  scripts

# The pod is deployed into the cluster and could use the certificates:
&amp;gt; curl -u elastic:changeme es-http:9200
{
  "name" : "es01",
  "cluster_name" : "okteto-cluster",
...

&amp;gt; nc -vz es01 9300
es01 (10.153.19.186:9300) open

&lt;/code&gt;&lt;/pre&gt;




&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sleeping system&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Q: "How can I restart a sleeping development environment?" - &lt;a href="https://www.okteto.com/pricing/?plan=SaaS"&gt;link&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;A: Visit any of the public endpoints of your development environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Okteto useful commands&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Log into the cluster
$ okteto init

# Deploy the local `okteto.yml` - wait 5/10m
$ okteto deploy --wait

# Activate a development container
# &amp;gt; https://www.okteto.com/docs/reference/cli/#up
$ okteto up

# Create kubectl context to Okteto cloud
$ okteto kubeconfig
$ kubectl get po

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🛂 Disclaimer
&lt;/h2&gt;

&lt;p&gt;This repository is built for side-project purposes and no warranties are provided.&lt;br&gt;
Activities to keep in mind before using it in production environments include, but are not limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We arbitrarily expose the &lt;code&gt;es01&lt;/code&gt; node as the API server:

&lt;ul&gt;
&lt;li&gt;So we don't have load balancing across the API requests&lt;/li&gt;
&lt;li&gt;There is no guarantee that &lt;code&gt;es01&lt;/code&gt; isn't chosen as the master node&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Create a more robust ES architecture with dedicated ES master nodes&lt;/li&gt;
&lt;li&gt;Fine-tune the ES nodes' roles and HW requirements&lt;/li&gt;
&lt;li&gt;All the points listed in the &lt;a href="https://github.com/pistocop/okteto-elasticsearch#-todos"&gt;"TODO" section&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔗 References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Github repository: &lt;a href="https://github.com/pistocop/okteto-elasticsearch"&gt;https://github.com/pistocop/okteto-elasticsearch&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Article on my blog: &lt;a href="https://www.pistocop.dev/posts/deploy_es_on_okteto/"&gt;https://www.pistocop.dev/posts/deploy_es_on_okteto/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Okteto documentation: &lt;a href="https://www.okteto.com/docs/welcome/overview/"&gt;https://www.okteto.com/docs/welcome/overview/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Elasticsearch documentation: &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html"&gt;https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>elasticsearch</category>
      <category>okteto</category>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
