<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergey Pronin</title>
    <description>The latest articles on DEV Community by Sergey Pronin (@spronin).</description>
    <link>https://dev.to/spronin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1407600%2Fbe71231f-71c1-427e-a962-a56f6c092512.png</url>
      <title>DEV Community: Sergey Pronin</title>
      <link>https://dev.to/spronin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/spronin"/>
    <language>en</language>
    <item>
      <title>Exposing replica nodes in Percona Operator for PostgreSQL</title>
      <dc:creator>Sergey Pronin</dc:creator>
      <pubDate>Thu, 24 Oct 2024 20:53:33 +0000</pubDate>
      <link>https://dev.to/spronin/exposing-replica-nodes-in-percona-operator-for-postgresql-1fmh</link>
      <guid>https://dev.to/spronin/exposing-replica-nodes-in-percona-operator-for-postgresql-1fmh</guid>
      <description>&lt;p&gt;This is a micro blog post about a feature that I failed to praise in &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/index.html" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt; - ability to expose replicas separately. &lt;/p&gt;

&lt;p&gt;The problem is that when you connect to the PostgreSQL cluster through pgBouncer, it routes all your reads and writes to the primary.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzznc9e14tuxmacfh1rq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzznc9e14tuxmacfh1rq0.png" alt="Percona Operator for PostgreSQL - pgbouncer connection" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this setup, pgBouncer plays three roles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pool connections&lt;/li&gt;
&lt;li&gt;Provide a single entry point to connect to&lt;/li&gt;
&lt;li&gt;Reroute queries to the new primary in case of primary failure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But we have heard a few times that this is not good enough. Sometimes you want your application to scale reads without hammering the primary.&lt;/p&gt;

&lt;p&gt;Starting with version 2.4.0, users can expose replica nodes separately. This already happens by default for all clusters; you can see the &lt;code&gt;-replicas&lt;/code&gt; service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)          AGE
cluster1-replicas   ClusterIP      10.118.234.58   &amp;lt;none&amp;gt; 
5432:31812/TCP   11h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To change the service type, alter the &lt;code&gt;spec.exposeReplicas&lt;/code&gt; section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  expose:
...
  exposeReplicas:
    type: LoadBalancer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the replicas exposed, your application's connections are going to look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ijll3vzsru39kvttdtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ijll3vzsru39kvttdtr.png" alt="Percona Operator for PostgreSQL - exposing replicas" width="800" height="744"&gt;&lt;/a&gt;&lt;/p&gt;
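&lt;p&gt;As a sketch of how an application might split its traffic (the service host names are assumed from a default cluster named &lt;code&gt;cluster1&lt;/code&gt; in the &lt;code&gt;default&lt;/code&gt; namespace; adjust for your deployment), reads can target the &lt;code&gt;-replicas&lt;/code&gt; service while writes keep flowing through pgBouncer:&lt;/p&gt;

```python
# Hypothetical sketch: route writes through pgBouncer (primary) and reads
# to the separately exposed -replicas service. Host names below assume the
# default services of a cluster named "cluster1" in the "default" namespace.

WRITER_HOST = "cluster1-pgbouncer.default.svc"   # primary, via pgBouncer
READER_HOST = "cluster1-replicas.default.svc"    # replica nodes

def dsn_for(operation: str, dbname: str = "app", port: int = 5432) -> str:
    """Build a libpq-style DSN, sending read-only traffic to replicas."""
    host = READER_HOST if operation == "read" else WRITER_HOST
    return f"host={host} port={port} dbname={dbname}"

print(dsn_for("read"))   # reporting queries hit the replicas service
print(dsn_for("write"))  # writes still flow to the primary
```

&lt;p&gt;Note that the replicas service load-balances across replica Pods, so any read routed there may land on any replica.&lt;/p&gt;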

&lt;p&gt;Percona offers various software options to run PostgreSQL:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;for regular deployments, use battle-tested, performant and reliable &lt;a href="https://www.percona.com/postgresql/software/postgresql-distribution" rel="noopener noreferrer"&gt;Percona Distribution for PostgreSQL&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;deploy, manage, scale your databases in Kubernetes with open source &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/index.html" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;for an RDS-like experience, but with no vendor lock-in and fully open source - get a slick UI and API with &lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;;&lt;/li&gt;
&lt;li&gt;for cloud-based deployments use our SaaS offering - &lt;a href="https://ivee.cloud" rel="noopener noreferrer"&gt;Ivee by Percona&lt;/a&gt; (currently in Beta).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And of course, if you are looking for help or professional support, let our team know: &lt;a href="https://www.percona.com/about/contact" rel="noopener noreferrer"&gt;https://www.percona.com/about/contact&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>postgres</category>
      <category>operators</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Introducing Ivee in Beta or the story of databases UX</title>
      <dc:creator>Sergey Pronin</dc:creator>
      <pubDate>Tue, 01 Oct 2024 16:34:52 +0000</pubDate>
      <link>https://dev.to/spronin/introducing-ivee-in-beta-or-the-story-of-databases-ux-11el</link>
      <guid>https://dev.to/spronin/introducing-ivee-in-beta-or-the-story-of-databases-ux-11el</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;We are introducing the early beta version of Ivee by Percona - a fully managed database-as-a-service. It is multi-cloud, supports multiple database technologies, and is backed by our expertise. &lt;/p&gt;

&lt;p&gt;&lt;a&gt;Try it completely for free&lt;/a&gt;&lt;br&gt;
Learn more on the website: &lt;a href="https://ivee.cloud" rel="noopener noreferrer"&gt;https://ivee.cloud&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The story
&lt;/h2&gt;

&lt;p&gt;I joined Percona 4 years ago and my first assignment was to do Product Management for Kubernetes Operators. Back then we were just starting with the Operators for MySQL and for MongoDB. Now we also have PostgreSQL and one more Operator for MySQL (which provides other replication methods).&lt;/p&gt;

&lt;p&gt;The goal was simple - automate the provisioning and management of databases anywhere. Four years ago, databases on Kubernetes were a greenfield. It was a bet on the future; the market was already moving there. &lt;/p&gt;

&lt;p&gt;Kubernetes Operators' adoption was skyrocketing, mostly because of two things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Kubernetes adoption itself&lt;/li&gt;
&lt;li&gt;User experience (UX)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To expand a bit on the UX: Operators provided a way not only to deploy databases, but also to automate their management (various day-2 operations). With these capabilities, Operators became a good replacement for managed services. On top of that, Percona's expertise played a huge role in shaping how Operators deploy database clusters: all the knowledge accumulated over the years is embedded into the logic that the Operators execute.&lt;/p&gt;

&lt;p&gt;Of course, there were some tradeoffs, for example the lack of a proper UI and the need to learn the basics of Kubernetes. &lt;/p&gt;

&lt;h3&gt;
  
  
  Percona Everest
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt; - an open source database platform - is a solution to close the gap between managed services even further. It relies on the Kubernetes Operators, but provides graphical UI and APIs to simplify the experience even more. &lt;/p&gt;

&lt;p&gt;It became generally available in June, and we've seen a lot of traction and interest. Lyrid - a cloud provider - &lt;a href="https://www.einpresswire.com/article/730151944/lyrid-launches-open-source-database-as-a-service-platform-based-on-percona-everest" rel="noopener noreferrer"&gt;recently announced&lt;/a&gt; that they are using Percona Everest to create their own DBaaS. And that is only three months after we went GA!&lt;/p&gt;

&lt;h3&gt;
  
  
  Ivee
&lt;/h3&gt;

&lt;p&gt;As a product person, I am always striving to reduce friction and deliver value to users faster. That is quite challenging with installable software. This is how we came up with the idea of building our own managed DBaaS. You will see our adherence to Percona standards: multi-database, multi-cloud, powered by our expertise and open source technologies. &lt;/p&gt;

&lt;p&gt;We launched &lt;a href="https://ivee.cloud" rel="noopener noreferrer"&gt;Ivee&lt;/a&gt; in Beta. Our main goal is to capture feedback and iterate fast. I would be extremely excited if you try it out (oh yeah, it is free) and give me your feedback.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://app.ivee.cloud/signup" rel="noopener noreferrer"&gt;Sign up here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>kubernetes</category>
      <category>ux</category>
      <category>ivee</category>
    </item>
    <item>
      <title>Running Gitea with PostgreSQL on Kubernetes</title>
      <dc:creator>Sergey Pronin</dc:creator>
      <pubDate>Wed, 04 Sep 2024 08:45:22 +0000</pubDate>
      <link>https://dev.to/spronin/running-gitea-with-postgresql-on-kubernetes-323c</link>
      <guid>https://dev.to/spronin/running-gitea-with-postgresql-on-kubernetes-323c</guid>
      <description>&lt;p&gt;&lt;a href="https://gitea.com/" rel="noopener noreferrer"&gt;Gitea&lt;/a&gt; - an open source tool for hosting version control using Git as well as other collaborative features like bug tracking, code review, continuous integration and more. It is quite easy to deploy it in Kubernetes - it has a &lt;a href="https://gitea.com/gitea/helm-chart" rel="noopener noreferrer"&gt;helm chart&lt;/a&gt; that will also install Redis and PostgreSQL databases through bitnami helm charts. &lt;/p&gt;

&lt;p&gt;But this initial helm-based deployment is hard to manage and scale for production, as it requires proper database management and day-2 operations. In this post, we will see how to deploy Gitea with &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/index.html" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt; (this is a follow-up to &lt;a href="https://forums.percona.com/t/public-schema-options-helm-chart/32892" rel="noopener noreferrer"&gt;this question on the forum&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy PostgreSQL
&lt;/h2&gt;

&lt;p&gt;To keep it simple, we will use the helm chart to deploy the Operator. See the instructions in &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/helm.html" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-operator percona/pg-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once done, we are ready to deploy the database. But there is a caveat with user management. Starting with PostgreSQL 15, &lt;code&gt;CREATE&lt;/code&gt; permission on the &lt;code&gt;public&lt;/code&gt; (default) schema is revoked from all users except the database owner (see the &lt;a href="https://www.postgresql.org/about/news/postgresql-15-released-2526/" rel="noopener noreferrer"&gt;release notes&lt;/a&gt;). This change causes the problem that the user on the forum is facing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2024/09/03 16:35:38 cmd/migrate.go:40:runMigrate() [F] Failed to initialize ORM engine: migrate: sync: pq: permission denied for schema public
Gitea migrate might fail due to database connection...This init-container will try again in a few seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To fix that, we need to create a user and grant it the required permissions. To automate the permission handling, we can use the &lt;code&gt;databaseInitSQL&lt;/code&gt; section.&lt;/p&gt;

&lt;p&gt;Create an init-sql ConfigMap resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% cat init-sql-cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: init-sql
data:
  init.sql: |
    \c gitea
    GRANT CREATE ON SCHEMA public TO "gitea";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the config map:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f init-sql-cm.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;init.sql&lt;/code&gt; will connect to the &lt;code&gt;gitea&lt;/code&gt; database and grant the &lt;code&gt;gitea&lt;/code&gt; user &lt;code&gt;CREATE&lt;/code&gt; on the &lt;code&gt;public&lt;/code&gt; schema. Feel free to use any other schema. &lt;/p&gt;

&lt;p&gt;Now our &lt;code&gt;values.yaml&lt;/code&gt; for the database cluster chart is going to look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;users:
  - name: gitea
    databases:
      - gitea

databaseInitSQL:
  key: init.sql
  name: init-sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install cluster1 percona/pg-db -f values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploy Gitea
&lt;/h2&gt;

&lt;p&gt;Gitea's helm &lt;a href="https://gitea.com/gitea/helm-chart/src/branch/main/values.yaml" rel="noopener noreferrer"&gt;values.yaml&lt;/a&gt; is huge. But to connect to the PostgreSQL cluster deployed with the Operator, we just need to alter the following sections:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# define database
gitea:
  config:
    database:
      DB_TYPE: postgres
      HOST: cluster1-pg-db-pgbouncer.default.svc:5432
      USER: gitea
      NAME: gitea
      SSL_MODE: require
      PASSWD: MYPASSWORD
      SCHEMA: public

# disable built-in postgresql
postgresql:
  enabled: false
postgresql-ha:
  enabled: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
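&lt;p&gt;The &lt;code&gt;PASSWD&lt;/code&gt; value must match the password the Operator generated for the &lt;code&gt;gitea&lt;/code&gt; user. As a sketch, it can be read from the user Secret (the Secret name here is assumed from the &lt;code&gt;cluster1&lt;/code&gt; release above; verify it with &lt;code&gt;kubectl get secrets&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret cluster1-pg-db-pguser-gitea \
  -o jsonpath='{.data.password}' | base64 -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;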



&lt;p&gt;Now follow the installation instructions from &lt;a href="https://docs.gitea.com/installation/install-on-kubernetes" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add gitea-charts https://dl.gitea.com/charts/
helm install gitea gitea-charts/gitea -f values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What is next
&lt;/h2&gt;

&lt;p&gt;It is important to acknowledge that user management in the Percona Operator for PostgreSQL does not fully hold up its end of the bargain. Users are created, but you can't start using them right away, as they don't have permissions on the &lt;code&gt;public&lt;/code&gt; schema. This breaks a lot of declarative flows. &lt;/p&gt;

&lt;p&gt;In an upcoming release of the Operator we are going to fix this by automatically creating per-user schemas (similar to what CrunchyData PGO &lt;a href="https://access.crunchydata.com/documentation/postgres-operator/latest/tutorials/basic-setup/user-management#automatically-creating-per-user-schemas" rel="noopener noreferrer"&gt;is doing&lt;/a&gt;). &lt;/p&gt;

&lt;p&gt;The issue is going to be fixed in &lt;a href="https://perconadev.atlassian.net/browse/K8SPG-634" rel="noopener noreferrer"&gt;K8SPG-634&lt;/a&gt;.&lt;/p&gt;
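&lt;p&gt;Until then, a manual workaround is to create a dedicated schema owned by the user. A sketch in standard SQL (the user and schema names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE SCHEMA IF NOT EXISTS "gitea" AUTHORIZATION "gitea";
ALTER ROLE "gitea" SET search_path TO "gitea";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;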

</description>
      <category>postgres</category>
      <category>kubernetes</category>
      <category>operations</category>
      <category>gitea</category>
    </item>
    <item>
      <title>From Zero to Hero: Disaster Recovery for PostgreSQL with Streaming Replication in Kubernetes</title>
      <dc:creator>Sergey Pronin</dc:creator>
      <pubDate>Fri, 17 May 2024 11:59:53 +0000</pubDate>
      <link>https://dev.to/spronin/from-zero-to-hero-disaster-recovery-for-postgresql-with-streaming-replication-in-kubernetes-2g96</link>
      <guid>https://dev.to/spronin/from-zero-to-hero-disaster-recovery-for-postgresql-with-streaming-replication-in-kubernetes-2g96</guid>
      <description>&lt;p&gt;In today’s digital landscape, disaster recovery is essential for any business. As our dependence on data grows, the impact of system outages or data loss becomes more severe, leading to major business interruptions and financial setbacks.&lt;/p&gt;

&lt;p&gt;Managing disaster recovery becomes even more complex with multi-cloud or multi-regional PostgreSQL deployments. Percona Operators offer a solution to simplify this process for PostgreSQL clusters running on Kubernetes. The Operator allows businesses to handle multi-cloud or hybrid-cloud PostgreSQL deployments effortlessly, ensuring that crucial data remains accessible and secure, no matter the circumstances.&lt;/p&gt;

&lt;p&gt;This article will guide you through setting up disaster recovery using &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/index.html" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt; and streaming replication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Design
&lt;/h2&gt;

&lt;p&gt;The design is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two sites - main and DR (disaster recovery). 

&lt;ul&gt;
&lt;li&gt;These can be two regions, two data centers, or even two namespaces&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;In each site we have an Operator and a PostgreSQL cluster

&lt;ul&gt;
&lt;li&gt;In the DR site the cluster is in Standby mode&lt;/li&gt;
&lt;li&gt;We set up streaming replication between these two clusters&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgofkb41364vk1x1poh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgofkb41364vk1x1poh4.png" alt="Image description" width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set it up
&lt;/h2&gt;

&lt;p&gt;All examples in this blog post are, as usual, available in the &lt;a href="https://github.com/spron-in/blog-data/tree/master/pg-k8s-streaming-dr" rel="noopener noreferrer"&gt;blog-data/pg-k8s-streaming-dr&lt;/a&gt; GitHub repository.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes cluster or clusters (depending on your topology)&lt;/li&gt;
&lt;li&gt;Percona Operator for PostgreSQL deployed. 

&lt;ul&gt;
&lt;li&gt;See &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/quickstart.html" rel="noopener noreferrer"&gt;quickstart guides&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Or just use the bundle.yaml that I have in the repository above:
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/spron-in/blog-data/master/pg-k8s-streaming-dr/bundle.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Primary
&lt;/h2&gt;

&lt;p&gt;The only thing specific to the Main cluster is that you need to expose it so that the standby can connect to the primary node. To expose the primary node, use the &lt;code&gt;spec.expose&lt;/code&gt; section:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  ...
  expose:
    type: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use a Service type of your choice. In my case, I have two clusters in different namespaces, so &lt;code&gt;ClusterIP&lt;/code&gt; is sufficient. Deploy the cluster as usual:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f main-cr.yaml -n main-pg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The service the standby should connect to is called &lt;code&gt;&amp;lt;clustername&amp;gt;-ha&lt;/code&gt; (&lt;code&gt;main-ha&lt;/code&gt; in my case):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;main-ha          ClusterIP   10.118.227.214   &amp;lt;none&amp;gt;        5432/TCP   163m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Standby
&lt;/h2&gt;

&lt;h3&gt;
  
  
  TLS certificates
&lt;/h3&gt;

&lt;p&gt;To get replication working, the Standby cluster needs to authenticate with the Main one. For that, both clusters must have certificates signed by the same certificate authority (CA). The default replication user &lt;code&gt;_crunchyrepl&lt;/code&gt; will be used.&lt;/p&gt;

&lt;p&gt;In the simplest case you can copy the certificates from the Main cluster. You need to look out for two Secrets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;main-cluster-cert&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;main-replication-cert&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Copy them to the namespace where the DR cluster is going to run and reference them under &lt;code&gt;spec.secrets&lt;/code&gt; (I renamed them, replacing &lt;code&gt;main&lt;/code&gt; with &lt;code&gt;dr&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  secrets:
    customTLSSecret:
      name: dr-cluster-cert
    customReplicationTLSSecret:
      name: dr-replication-cert
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
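&lt;p&gt;A rough sketch of copying and renaming a Secret across namespaces (names are taken from this example; you may need to strip Operator-added labels and other metadata before applying):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get secret main-cluster-cert -n main-pg -o yaml \
  | sed -e 's/name: main-/name: dr-/' -e 's/namespace: main-pg/namespace: dr-pg/' \
  | kubectl apply -n dr-pg -f -
kubectl get secret main-replication-cert -n main-pg -o yaml \
  | sed -e 's/name: main-/name: dr-/' -e 's/namespace: main-pg/namespace: dr-pg/' \
  | kubectl apply -n dr-pg -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;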



&lt;p&gt;If you are generating your own certificates, just remember the following rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Certificates for both Main and Standby clusters must be signed by the same CA&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;customReplicationTLSSecret&lt;/code&gt; must have a Common Name (CN) that matches &lt;code&gt;_crunchyrepl&lt;/code&gt;, the default replication user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Read more about certificates in &lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/TLS.html" rel="noopener noreferrer"&gt;the documentation&lt;/a&gt;. &lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;Apart from setting the certificates correctly, you should also set the standby configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  standby:
    enabled: true
    host: main-ha.main-pg.svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;standby.enabled&lt;/code&gt; controls whether the cluster is a standby&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;standby.host&lt;/code&gt; must point to the primary node of the Main cluster. In my case it is the &lt;code&gt;main-ha&lt;/code&gt; service in another namespace.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Deploy the DR cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f dr-cr.yaml -n dr-pg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verify
&lt;/h2&gt;

&lt;p&gt;Once both clusters are up, you can verify that replication is working.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Insert some data into the Main cluster&lt;/li&gt;
&lt;li&gt;Connect to the DR cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To connect to the DR cluster, use the same credentials that you used for the Main cluster; this also verifies that authentication is working. You should see the data from the Main cluster in the DR cluster as well.&lt;/p&gt;
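&lt;p&gt;A quick way to confirm the role of each cluster is the standard &lt;code&gt;pg_is_in_recovery()&lt;/code&gt; function: it should return &lt;code&gt;f&lt;/code&gt; on the Main cluster and &lt;code&gt;t&lt;/code&gt; on the standby:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT pg_is_in_recovery();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;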

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Disaster recovery is crucial for maintaining business continuity in today's data-driven environment. Implementing a robust disaster recovery strategy for multi-cloud or multi-regional PostgreSQL deployments can be complex. However, the &lt;a href="https://www.percona.com/postgresql/software/percona-operator-for-postgresql" rel="noopener noreferrer"&gt;Percona Operator for PostgreSQL&lt;/a&gt; simplifies this process by enabling seamless management of PostgreSQL clusters on Kubernetes. By following the steps outlined in this article, you can set up disaster recovery using Percona Operator and streaming replication, ensuring your critical data remains secure and accessible. This approach not only provides peace of mind but also safeguards against significant business disruptions and financial losses. &lt;/p&gt;

&lt;p&gt;At Percona, we are aiming to provide the best open source databases and tooling possible. As the next level of simplification and user experience for databases on Kubernetes, we recently released Percona Everest (currently in Beta). It is a cloud native database platform with a slick UI. It deploys and manages databases on Kubernetes for you without the need to look into YAML manifests. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.percona.com/percona-operator-for-postgresql/2.0/index.html" rel="noopener noreferrer"&gt;Try Percona Operator for PostgreSQL&lt;/a&gt;&lt;/strong&gt; | &lt;strong&gt;&lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Try Percona Everest&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>cloudnative</category>
      <category>kubernetes</category>
      <category>operators</category>
    </item>
    <item>
      <title>Deploy MongoDB on Kubernetes with ArgoCD and Helm charts</title>
      <dc:creator>Sergey Pronin</dc:creator>
      <pubDate>Tue, 23 Apr 2024 09:17:45 +0000</pubDate>
      <link>https://dev.to/spronin/deploy-mongodb-on-kubernetes-with-argocd-and-helm-charts-5dg</link>
      <guid>https://dev.to/spronin/deploy-mongodb-on-kubernetes-with-argocd-and-helm-charts-5dg</guid>
      <description>&lt;p&gt;According to &lt;a href="https://www.cncf.io/wp-content/uploads/2023/11/CNCF_GitOps-Microsurvey_Final.pdf" rel="noopener noreferrer"&gt;CNCF GitOps Microsurvey&lt;/a&gt;, adoption of GitOps among Kubernetes practitioners is growing and ArgoCD and Flux are the most widely used CNCF GitOps projects. The desire to simplify application deployment and management in Kubernetes also makes sense, as complexity in Cloud Native space is the most common challenge. &lt;/p&gt;

&lt;p&gt;Databases on Kubernetes are complex as well, but Operators take away toil from administrators and developers. In this blog post we will simplify day-1 and day-2 operations for Databases on Kubernetes with a synergetic mix of ArgoCD and Operators, using &lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt; as an example.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing a strategy
&lt;/h2&gt;

&lt;p&gt;ArgoCD allows the use of both regular YAML manifests and Helm charts. In conversations with our users, we see a tendency to use helm charts when they are available. Helm charts are a universal way to deploy and manage applications on Kubernetes, which results in easier onboarding.&lt;/p&gt;

&lt;p&gt;Plain YAML manifests, though, can provide more flexibility, as they give direct exposure to the resources. &lt;/p&gt;

&lt;p&gt;In this article we provide an example of using Helm charts. Read &lt;a href="https://www.percona.com/blog/deploy-postgresql-on-kubernetes-using-gitops-and-argocd/" rel="noopener noreferrer"&gt;Deploy PostgreSQL on Kubernetes using Gitops and ArgoCD&lt;/a&gt; to get an idea of how to handle the YAML manifests case; it can be easily adapted for Percona Operator for MongoDB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare the environment
&lt;/h2&gt;

&lt;p&gt;For starters, you need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Kubernetes cluster&lt;/li&gt;
&lt;li&gt;A Helm chart. In this article we use the official &lt;a href="https://github.com/percona/percona-helm-charts" rel="noopener noreferrer"&gt;Percona helm charts&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Deploy and configure ArgoCD
&lt;/h3&gt;

&lt;p&gt;ArgoCD has &lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;detailed documentation&lt;/a&gt; explaining the installation process. I did the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Action
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiupdfshdumd8aon734d3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiupdfshdumd8aon734d3.png" alt="ArgoCD with Percona Operator for MongoDB" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The flow is the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user creates the Argo Application resources&lt;/li&gt;
&lt;li&gt;ArgoCD syncs with the Helm repository&lt;/li&gt;
&lt;li&gt;ArgoCD provisions the resources based on the Application definitions&lt;/li&gt;
&lt;li&gt;The user connects to the database once it is provisioned&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One of the key things to grasp is that ArgoCD uses the Helm repository as a source of truth but does not install the chart as a Helm release in the Kubernetes cluster, so you will not see anything in the &lt;code&gt;helm list&lt;/code&gt; output.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Resource Definitions (CRDs)
&lt;/h3&gt;

&lt;p&gt;CRDs allow users to extend the Kubernetes API. Operators process the Custom Resources (CRs) that users create and, as a result, deploy and manage applications. Custom Resource Definitions must be created before users start creating CRs. &lt;/p&gt;

&lt;p&gt;ArgoCD provides a way to sequence the provisioning of the resources, but only within a single application. In the case of a helm chart, we have two different Argo applications. &lt;/p&gt;

&lt;p&gt;The best practice here is to ensure that the application containing the CRDs is provisioned first. The usual approach is role separation: Kubernetes cluster administrators provision CRDs, while other roles, like developers or DBAs, focus only on the database components. &lt;/p&gt;

&lt;p&gt;Application to deploy CRDs - &lt;a href="https://github.com/spron-in/blog-data/blob/master/gitops-mongo/argo_psmdb_operator.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;argo_psmdb_operator.yaml&lt;/code&gt;&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: psmdb-operator
  namespace: argocd
spec:
  project: default
  syncPolicy:
    automated: {}
    syncOptions:
    - ServerSideApply=true
  source:
    chart: psmdb-operator
    path: charts/psmdb-operator/
    repoURL: https://percona.github.io/percona-helm-charts/
    targetRevision: 1.15.4
    helm:
      releaseName: psmdb-operator
  destination:
    server: "https://kubernetes.default.svc"
    namespace: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some things to note:&lt;br&gt;
&lt;code&gt;- ServerSideApply=true&lt;/code&gt; is a must. With the default client-side apply, the entire object is stored in the &lt;code&gt;last-applied-configuration&lt;/code&gt; annotation, and our large CRDs exceed the 262144-byte limit for annotations, so CRD creation fails with the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2024-04-17T14:58:31Z" level=info msg="Updating operation state. phase: Running -&amp;gt; Running, message: 'one or more objects failed to apply, reason: CustomResourceDefinition.apiextensions.k8s.io \"perconaservermongodbs.psmdb.percona.com\" is invalid: metadata.annotations: Too long: must have at most 262144 bytes. Retrying attempt #5 at 2:57PM.' -&amp;gt; 'one or more tasks are running'" application=argocd/psmdb-operator syncId=00006-HyBZr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We enable automated sync in the example; this might not be desired for your use case. Read more about synchronization in the &lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/auto_sync/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
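
&lt;p&gt;As a sketch of the common knobs (see the ArgoCD documentation for the full list), automated sync can be tuned with &lt;code&gt;prune&lt;/code&gt; and &lt;code&gt;selfHeal&lt;/code&gt;, or omitted entirely to require manual syncs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syncPolicy:
  automated:
    prune: true      # delete cluster resources that were removed from the chart
    selfHeal: true   # revert manual changes made directly in the cluster
  syncOptions:
  - ServerSideApply=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Leaving &lt;code&gt;syncPolicy&lt;/code&gt; out makes ArgoCD report the drift but apply nothing until you sync manually.&lt;/p&gt;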

&lt;p&gt;Check if the application is synchronized:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% kubectl -n argocd get application psmdb-operator
NAME             SYNC STATUS   HEALTH STATUS
psmdb-operator   Synced        Healthy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once it is synced, you should be able to see the Operator Pod and CRDs deployed.&lt;/p&gt;
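
&lt;p&gt;A quick way to check both (the label selector follows the standard Helm chart labeling convention; adjust it if you customized the release):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% kubectl get crd | grep psmdb.percona.com
% kubectl -n default get pods -l app.kubernetes.io/name=psmdb-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;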

&lt;h3&gt;
  
  
  Deploy the Percona Server for MongoDB cluster
&lt;/h3&gt;

&lt;p&gt;Deploying the Custom Resource for the database cluster is no different from deploying the CRDs and the Operator. We use another helm chart and also add extra parameters as an example - &lt;a href="https://github.com/spron-in/blog-data/blob/master/gitops-mongo/argo_psmdb_db.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;argo_psmdb_db.yaml&lt;/code&gt;&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: psmdb-db
  namespace: argocd
spec:
  project: default
  syncPolicy:
    automated: {}
  source:
    chart: psmdb-db
    path: charts/psmdb-db/
    repoURL: https://percona.github.io/percona-helm-charts/
    targetRevision: 1.15.3
    helm:
      releaseName: psmdb-db
      valuesObject:
        sharding:
          enabled: false
        ...
  destination:
    server: "https://kubernetes.default.svc"
    namespace: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pay attention to the &lt;code&gt;valuesObject&lt;/code&gt; section, where we show an example of passing &lt;a href="https://github.com/percona/percona-helm-charts/blob/main/charts/psmdb-db/values.yaml" rel="noopener noreferrer"&gt;&lt;code&gt;values.yaml&lt;/code&gt;&lt;/a&gt; variables. There are three ways to do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;valuesObject&lt;/code&gt; - the one in our example&lt;/li&gt;
&lt;li&gt;&lt;code&gt;values&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source:
  helm:
    values: |
      sharding:
        enabled: false
      ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;&lt;code&gt;valueFiles&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source:
  helm:
    valueFiles:
    - myvalues.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
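
&lt;p&gt;Note that &lt;code&gt;valueFiles&lt;/code&gt; paths are resolved relative to the chart source, so a values file kept in your own git repository has to be pulled in through a second source. A sketch assuming the ArgoCD 2.6+ multiple sources feature (the git repository URL is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  sources:
  - chart: psmdb-db
    repoURL: https://percona.github.io/percona-helm-charts/
    targetRevision: 1.15.3
    helm:
      valueFiles:
      - $values/myvalues.yaml
  - repoURL: https://github.com/example/gitops-repo.git   # hypothetical repo holding myvalues.yaml
    targetRevision: main
    ref: values
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;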



&lt;p&gt;If you do everything correctly, you will see the application synchronized and the Custom Resource deployed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;% kubectl -n argocd get application psmdb-db
NAME       SYNC STATUS   HEALTH STATUS
psmdb-db   Synced        Healthy


% kubectl get psmdb
NAME       ENDPOINT   STATUS         AGE
psmdb-db   ...        ready          73s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Deploying the &lt;a href="https://github.com/percona/percona-server-mongodb-operator" rel="noopener noreferrer"&gt;Percona Operator for MongoDB&lt;/a&gt; using ArgoCD represents a significant step forward in managing complex database systems within Kubernetes. By integrating Helm charts with GitOps principles, organizations can achieve more consistent, scalable, and error-resistant deployments. &lt;/p&gt;

&lt;p&gt;As the cloud-native ecosystem continues to evolve, tools like ArgoCD and Kubernetes Operators will play increasingly critical roles. Their ability to reduce manual overhead and streamline operations is crucial in an environment where agility and reliability are paramount. We encourage practitioners to explore these technologies, experiment with the configurations discussed, and contribute back to the community with any findings or improvements.&lt;/p&gt;

&lt;p&gt;Percona is committed to providing open source solutions to deploy and manage databases anywhere. Try out our Operators, or check out the Beta version of our open source cloud native database platform - &lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Percona Everest&lt;/a&gt;. It provides a nice GUI and API to deploy databases on Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.percona.com/percona-operator-for-mongodb/index.html" rel="noopener noreferrer"&gt;100% open source Operator for MongoDB&lt;/a&gt; | &lt;a href="https://docs.percona.com/everest/index.html" rel="noopener noreferrer"&gt;Deploy MongoDB with Percona Everest&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>database</category>
      <category>cloudnative</category>
    </item>
  </channel>
</rss>
