<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Srinivasa Vasu</title>
    <description>The latest articles on DEV Community by Srinivasa Vasu (@humourmind).</description>
    <link>https://dev.to/humourmind</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F187560%2Ff2f02383-2209-46e9-8983-b8fe21767b3c.jpg</url>
      <title>DEV Community: Srinivasa Vasu</title>
      <link>https://dev.to/humourmind</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/humourmind"/>
    <language>en</language>
    <item>
      <title>Better Developer Experience: Getting Started with YugabyteDB on Gitpod</title>
      <dc:creator>Srinivasa Vasu</dc:creator>
      <pubDate>Thu, 24 Feb 2022 23:34:57 +0000</pubDate>
      <link>https://dev.to/yugabyte/better-developer-experience-getting-started-with-yugabytedb-on-gitpod-4nf1</link>
      <guid>https://dev.to/yugabyte/better-developer-experience-getting-started-with-yugabytedb-on-gitpod-4nf1</guid>
      <description>&lt;p&gt;Developer onboarding and experience are getting simpler every day. But even though developers rely on modern software development practices such as twelve-factor apps, cloud native architecture, and continuous integration, onboarding itself remains a challenge. What developers need is an integrated, self-contained platform that helps them get started with ease.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitpod.io/" rel="noopener noreferrer"&gt;Gitpod&lt;/a&gt; is one such way we can steer into that problem space to create a better developer experience. More specifically, Gitpod provides git-based, fully automated, integrated cloud-native development workflows with the prerequisites configured. This “GitDev” approach offers a preconfigured environment that seamlessly provides a consistent development environment for stream-aligned teams.&lt;/p&gt;

&lt;p&gt;In this post, we’ll explore how to get started with &lt;a href="https://www.yugabyte.com/yugabytedb/" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; in a Gitpod-driven workspace covering the following workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a single-node instance&lt;/li&gt;
&lt;li&gt;Creating a cluster with multiple nodes&lt;/li&gt;
&lt;li&gt;Customizing the cluster configuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  First steps
&lt;/h2&gt;

&lt;p&gt;Gitpod is a configurable, ready-to-code cloud development environment accessible via a browser. A Gitpod workspace includes everything we need to develop: the Visual Studio Code editor, common languages, tools, and utilities. This instantly sets up a cloud-hosted, containerized, and customizable editing environment that is ready to go.&lt;/p&gt;

&lt;p&gt;Gitpod doesn’t require anything on your local computer beyond a code editor and the Git CLI; most development happens in the cloud through a web browser. Refer to the &lt;a href="https://www.gitpod.io/docs/quickstart" rel="noopener noreferrer"&gt;QuickStart&lt;/a&gt; section to get started with Gitpod.&lt;/p&gt;

&lt;p&gt;We’ll use GitHub as our source code repository. For starters, create a new empty repository &lt;strong&gt;yb-git-pod&lt;/strong&gt; and clone that to the local workstation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;git clone https://github.com/[user]/yb-git-pod.git&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can find the complete code in this &lt;a href="https://github.com/srinivasa-vasu/yb-git-pod" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the base image
&lt;/h2&gt;

&lt;p&gt;Let’s make a new file— &lt;strong&gt;.gitpod.Dockerfile&lt;/strong&gt; —inside the &lt;em&gt;yb-git-pod&lt;/em&gt; directory we cloned to the local workstation. The content of the YugabyteDB Dockerfile is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.02.36-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.02.36-PM.png" alt="Creating the base image with YugabyteDB on Gitpod" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a single-node instance
&lt;/h2&gt;

&lt;p&gt;Next, let’s make a new file: &lt;strong&gt;.gitpod.yml&lt;/strong&gt;. The content of this file is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.05.37-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.05.37-PM.png" alt="Creating a single-node instance with YugabyteDB on Gitpod" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, commit .gitpod.Dockerfile and .gitpod.yml to your GitHub repository. To initialize the Gitpod workspace, launch &lt;a href="https://gitpod.io/#%5BREPO_URL%5D" rel="noopener noreferrer"&gt;https://gitpod.io/#[REPO_URL]&lt;/a&gt; in a browser window. Replace [REPO_URL] with your repository URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.06.57-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.06.57-PM.png" alt="Launcing a Gitpod workspace." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Gitpod workspace gets created based on the definition of the .gitpod.yml file. The &lt;strong&gt;“yugabyted start”&lt;/strong&gt; statement can be part of the “before” or “command” task. These two tasks get executed during the initial workspace creation and re-initialization phases. YugabyteDB starts during the initialization phase, and upon completion, a terminal launches with the ysql shell prompt.&lt;/p&gt;

&lt;p&gt;Gitpod manages the ports exposed in the Dockerfile definition by automatically creating port-forwarding rules. We get links to access the Web-UI right from the same interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.08.02-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.08.02-PM.png" alt="Launching a Gitpod workspace." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a cluster with multiple nodes
&lt;/h2&gt;

&lt;p&gt;We need to update .gitpod.yml with the following multi-node configuration. The complete multi-node configuration spec is available in the .gitpod-cluster.yml file in the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.10.30-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.10.30-PM.png" alt="Creating a cluster with multiple nodes." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, commit .gitpod.yml with this updated configuration to your GitHub repository and re-initialize the Gitpod workspace &lt;a href="https://gitpod.io/#%5BREPO_URL%5D" rel="noopener noreferrer"&gt;https://gitpod.io/#[REPO_URL]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Specifically, this configuration will initialize a three-node cluster based on the loopback interface configuration. The node initialization is sequenced properly using &lt;em&gt;gp sync-done&lt;/em&gt; and &lt;em&gt;gp sync-await&lt;/em&gt; calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Customizing the cluster configuration
&lt;/h2&gt;

&lt;p&gt;The following config builds on the previous cluster configuration, adding &lt;a href="https://docs.yugabyte.com/latest/yugabyte-platform/manage-deployments/edit-config-flags/" rel="noopener noreferrer"&gt;GFlags&lt;/a&gt; and cluster-level customization. In the snippet below, the cluster is configured with custom placement info. The complete multi-node custom configuration spec is available in the .gitpod-cluster-config.yml file in the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.24.26-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.24.26-PM.png" alt="Customizing the cluster configuration." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, commit .gitpod.yml with this updated configuration to your GitHub repository and re-initialize the Gitpod workspace &lt;a href="https://gitpod.io/#%5BREPO_URL%5D" rel="noopener noreferrer"&gt;https://gitpod.io/#[REPO_URL]&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Web-UI console now reflects the custom configuration changes, as illustrated below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.19.13-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FScreen-Shot-2022-02-24-at-4.19.13-PM.png" alt="The Gitpod Web-UI console." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;To learn more about integrating microservices with a Gitpod-powered YugabyteDB workspace to improve developer experience, check out &lt;a href="https://docs.yugabyte.com/latest/develop/gitdev/" rel="noopener noreferrer"&gt;this article&lt;/a&gt;. We have also submitted a pull request for a dedicated YugabyteDB base image for Gitpod. &lt;/p&gt;

&lt;p&gt;You can follow &lt;a href="https://github.com/gitpod-io/workspace-images/pull/604" rel="noopener noreferrer"&gt;this pull request&lt;/a&gt; to track its progress. Either way, we hope you give this short tutorial a shot.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Any questions? Let us know what you think in the &lt;a href="https://www.yugabyte.com/community/" rel="noopener noreferrer"&gt;YugabyteDB Community Slack channel&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.yugabyte.com/better-developer-experience-yugabytedb-gitpod/" rel="noopener noreferrer"&gt;Better Developer Experience: Getting Started with YugabyteDB on Gitpod&lt;/a&gt; appeared first on &lt;a href="https://blog.yugabyte.com" rel="noopener noreferrer"&gt;The Distributed SQL Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gitpod</category>
      <category>developer</category>
      <category>distributedsql</category>
      <category>yugabytedb</category>
    </item>
    <item>
      <title>Cloud Native Java: Integrating YugabyteDB with Spring Boot, Quarkus, and Micronaut</title>
      <dc:creator>Srinivasa Vasu</dc:creator>
      <pubDate>Wed, 16 Feb 2022 15:49:00 +0000</pubDate>
      <link>https://dev.to/yugabyte/cloud-native-java-integrating-yugabytedb-with-spring-boot-quarkus-and-micronaut-3e0d</link>
      <guid>https://dev.to/yugabyte/cloud-native-java-integrating-yugabytedb-with-spring-boot-quarkus-and-micronaut-3e0d</guid>
      <description>&lt;p&gt;Java is the quintessential language runtime for enterprise applications built on monoliths, microservices, and modular architecture patterns. But when it comes to “Enterprise Java,” &lt;a href="https://spring.io/" rel="noopener noreferrer"&gt;Spring&lt;/a&gt; is the de facto framework of choice.&lt;/p&gt;

&lt;p&gt;The Spring ecosystem—with the simplicity of &lt;a href="https://spring.io/projects/spring-boot" rel="noopener noreferrer"&gt;Spring Boot&lt;/a&gt;—has grown to provide integration touchpoints to most of the Java ecosystem, offering a clean abstraction and “glue” code for building cohesive enterprise applications. As the ecosystem evolves, however, two newer frameworks are growing in popularity: &lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;Quarkus&lt;/a&gt; and &lt;a href="https://micronaut.io/" rel="noopener noreferrer"&gt;Micronaut&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Spring Boot, Quarkus, and Micronaut are seeing massive adoption for cloud native greenfield applications, as well as brownfield modernization efforts. In this blog post, we look at how YugabyteDB’s &lt;a href="https://docs.yugabyte.com/latest/integrations/jdbc-driver/" rel="noopener noreferrer"&gt;YSQL smart driver&lt;/a&gt; integrates with all three popular microservices frameworks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FYugabyteDB-Integration-Cloud-Native-Java-Frameworks-Image-300x264.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F02%2FYugabyteDB-Integration-Cloud-Native-Java-Frameworks-Image-300x264.png" alt="YugabyteDB-Integration-Cloud-Native-Java-Frameworks-Image" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Follow the &lt;a href="https://docs.yugabyte.com/latest/quick-start/" rel="noopener noreferrer"&gt;YB Quickstart&lt;/a&gt; instructions to run a local YugabyteDB cluster. Test YugabyteDB’s &lt;a href="https://docs.yugabyte.com/latest/quick-start/explore/ysql/" rel="noopener noreferrer"&gt;YSQL API&lt;/a&gt; to confirm you have a YSQL service running on “localhost:5433”.&lt;/li&gt;
&lt;li&gt;You will need JDK 11 or above. You can use &lt;a href="https://sdkman.io/install" rel="noopener noreferrer"&gt;SDKMAN&lt;/a&gt; to install the JDK runtime.&lt;/li&gt;
&lt;/ul&gt;
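
&lt;p&gt;For example, assuming the default credentials from the quick start, you can confirm the YSQL service is reachable with:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prints the YugabyteDB-flavored PostgreSQL version string if the service is up
ysqlsh -h 127.0.0.1 -p 5433 -U yugabyte -c 'SELECT version();'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;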

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;You can find the complete source code in this &lt;a href="https://github.com/yugabyte/yb-ms-data" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;. This project has directories for all three frameworks: spring-boot, quarkus, and micronaut. Clone this repository to a local workstation and open the “yb-ms-data” directory in your favorite IDE to easily navigate and explore framework-specific code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/yugabyte/yb-ms-data.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This source repository consists of a simple JPA-based web application with CRUD functionality. In this blog, we will focus only on the database integration point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Spring Boot
&lt;/h2&gt;

&lt;p&gt;Spring Boot makes it easy to create stand-alone, production-grade Spring-based Applications that you can “just run.”&lt;/p&gt;

&lt;p&gt;The following section describes how to build a simple JPA-based web application with the Spring Boot framework for YSQL API using the &lt;a href="https://docs.yugabyte.com/latest/integrations/jdbc-driver/" rel="noopener noreferrer"&gt;YugabyteDB JDBC Driver&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For starters, navigate to the &lt;strong&gt;&lt;code&gt;springboot&lt;/code&gt;&lt;/strong&gt; framework folder inside the project &lt;strong&gt;&lt;code&gt;yb-ms-data&lt;/code&gt;&lt;/strong&gt; directory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;cd yb-ms-data/springboot&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Dependencies
&lt;/h3&gt;

&lt;p&gt;This project depends on the following libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;implementation("org.springframework.boot:spring-boot-starter-web")
implementation("org.springframework.boot:spring-boot-starter-actuator")
implementation("org.springframework.boot:spring-boot-starter-data-jpa")
implementation("org.flywaydb:flyway-core")
implementation("org.springdoc:springdoc-openapi-ui:1.5.9")
implementation("com.yugabyte:spring-data-yugabytedb-ysql:2.3.0") {
   exclude(module = "jdbc-yugabytedb")
}
implementation("org.springframework.retry:spring-retry")
annotationProcessor("org.springframework.boot:spring-boot-configuration-processor")
annotationProcessor("org.projectlombok:lombok")
developmentOnly("org.springframework.boot:spring-boot-devtools")
compileOnly("org.projectlombok:lombok")
runtimeOnly("io.micrometer:micrometer-registry-prometheus")
implementation("com.yugabyte:jdbc-yugabytedb:42.3.3")
testImplementation("org.springframework.boot:spring-boot-starter-test")
testImplementation("org.flywaydb.flyway-test-extensions:flyway-spring-test:7.0.0")
testImplementation("com.yugabyte:testcontainers-yugabytedb:1.0.0-beta-4")
testImplementation("org.testcontainers:junit-jupiter:1.15.3")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the driver dependency library &lt;em&gt;“com.yugabyte:jdbc-yugabytedb:42.3.3”&lt;/em&gt; to the latest version. Grab the latest version from &lt;a href="https://docs.yugabyte.com/latest/integrations/jdbc-driver/" rel="noopener noreferrer"&gt;YugabyteDB JDBC Driver&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Driver configuration
&lt;/h3&gt;

&lt;p&gt;Refer to the file &lt;strong&gt;&lt;code&gt;yb-ms-data/springboot/src/main/resources/application.yaml&lt;/code&gt;&lt;/strong&gt; in the project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring:
  jpa:
    properties:
      hibernate:
        connection:
          provider_disables_autocommit: true
        default_schema: todo
    open-in-view: false
  datasource:
    url: jdbc:yugabytedb://[hostname:port]/yugabyte?load-balance=true
    username: yugabyte
    password: yugabyte
    driver-class-name: com.yugabyte.Driver
    hikari:
      minimum-idle: 5
      maximum-pool-size: 20
      auto-commit: false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;url&lt;/code&gt;&lt;/strong&gt; is the JDBC connection string. You can set YugabyteDB driver-specific properties such as “load-balance” and “topology-keys” as part of this string.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;driver-class-name&lt;/code&gt;&lt;/strong&gt; is the JDBC driver class name.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Update the JDBC URL &lt;strong&gt;&lt;code&gt;“jdbc:yugabytedb://[hostname:port]/yugabyte”&lt;/code&gt;&lt;/strong&gt; in the application.yaml file with the appropriate “hostname” and “port” details. Remember to remove the square brackets; they are placeholders indicating the fields that need user input.&lt;/p&gt;
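
&lt;p&gt;For example, a filled-in URL for a local cluster with topology-aware load balancing enabled might look like the following (the placement values are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;url: jdbc:yugabytedb://127.0.0.1:5433/yugabyte?load-balance=true&amp;amp;topology-keys=cloud1.region1.zone1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;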

&lt;h3&gt;
  
  
  Build and run the application
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Navigate to the springboot folder:
&amp;gt; cd yb-ms-data/springboot
To build the application:
&amp;gt; gradle build
To run and test the application:
&amp;gt; gradle bootRun
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Quarkus
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://quarkus.io/" rel="noopener noreferrer"&gt;Quarkus&lt;/a&gt; is a Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM. It’s crafted from best-of-breed Java libraries and standards.&lt;/p&gt;

&lt;p&gt;This section describes how to build a simple JPA-based web application with the Quarkus framework for YSQL API using the YugabyteDB JDBC Driver.&lt;/p&gt;

&lt;p&gt;For starters, navigate to the &lt;strong&gt;&lt;code&gt;quarkus&lt;/code&gt;&lt;/strong&gt; framework folder inside the project &lt;strong&gt;&lt;code&gt;yb-ms-data&lt;/code&gt;&lt;/strong&gt; directory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;cd yb-ms-data/quarkus&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Dependencies
&lt;/h3&gt;

&lt;p&gt;This project depends on the following libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;implementation("io.quarkus:quarkus-hibernate-orm")
implementation("io.quarkus:quarkus-flyway")
implementation("io.quarkus:quarkus-resteasy")
implementation("io.quarkus:quarkus-resteasy-jackson")
implementation("io.quarkus:quarkus-config-yaml")
implementation("io.quarkus:quarkus-agroal")
implementation("io.quarkus:quarkus-smallrye-fault-tolerance")
implementation("com.yugabyte:jdbc-yugabytedb:42.3.3")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the driver dependency library &lt;em&gt;“com.yugabyte:jdbc-yugabytedb:42.3.3”&lt;/em&gt; to the latest version. Grab the latest version from &lt;a href="https://docs.yugabyte.com/latest/integrations/jdbc-driver/" rel="noopener noreferrer"&gt;YugabyteDB JDBC Driver&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Driver configuration
&lt;/h3&gt;

&lt;p&gt;Refer to the file &lt;strong&gt;&lt;code&gt;yb-ms-data/quarkus/src/main/resources/application.yaml&lt;/code&gt;&lt;/strong&gt; in the project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quarkus:
  datasource:
    db-kind: pgsql
    jdbc:
      url: jdbc:yugabytedb://[hostname:port]/yugabyte
      driver: com.yugabyte.Driver
      initial-size: 5
      max-size: 20
      additional-jdbc-properties:
        load-balance: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;db-kind&lt;/code&gt;&lt;/strong&gt; indicates the type of database instance. The value can be &lt;em&gt;pgsql&lt;/em&gt; or &lt;em&gt;postgresql&lt;/em&gt; for PostgreSQL or PostgreSQL API-compatible instances. Either value works because YugabyteDB is PostgreSQL-compatible and reuses the PostgreSQL query layer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;url&lt;/code&gt;&lt;/strong&gt; is the JDBC connection string.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;driver&lt;/code&gt;&lt;/strong&gt; is the JDBC driver class name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;additional-jdbc-properties&lt;/code&gt;&lt;/strong&gt; is where YugabyteDB driver-specific properties such as “load-balance” and “topology-keys” can be set.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, update the JDBC URL &lt;strong&gt;&lt;code&gt;“jdbc:yugabytedb://[hostname:port]/yugabyte”&lt;/code&gt;&lt;/strong&gt; in the application.yaml file with the appropriate “hostname” and “port” details. Remember to remove the square brackets; they are placeholders indicating the fields that need user input.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build and run the application
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Navigate to the quarkus folder:
&amp;gt; cd yb-ms-data/quarkus
To build the application:
&amp;gt; gradle quarkusBuild
To run and test the application:
&amp;gt; gradle quarkusDev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Micronaut
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://micronaut.io/" rel="noopener noreferrer"&gt;Micronaut&lt;/a&gt; is a modern, JVM-based, full-stack framework for building modular, easily-testable microservice and serverless applications.&lt;/p&gt;

&lt;p&gt;This section describes how to build a simple JPA-based web application with the Micronaut framework for YSQL API using the YugabyteDB JDBC Driver.&lt;/p&gt;

&lt;p&gt;Navigate to the &lt;strong&gt;&lt;code&gt;micronaut&lt;/code&gt;&lt;/strong&gt; framework folder inside the project &lt;strong&gt;&lt;code&gt;yb-ms-data&lt;/code&gt;&lt;/strong&gt; directory.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;cd yb-ms-data/micronaut&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Dependencies
&lt;/h3&gt;

&lt;p&gt;This project depends on the following libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;annotationProcessor("io.micronaut:micronaut-http-validation")
annotationProcessor("io.micronaut.data:micronaut-data-processor")
annotationProcessor("io.micronaut.openapi:micronaut-openapi")
implementation("io.micronaut:micronaut-http-client")
implementation("io.micronaut:micronaut-management")
implementation("io.micronaut:micronaut-runtime")
implementation("io.micronaut.data:micronaut-data-hibernate-jpa")
implementation("io.micronaut.flyway:micronaut-flyway")
implementation("io.micronaut.sql:micronaut-jdbc-hikari")
implementation("io.swagger.core.v3:swagger-annotations")
implementation("javax.annotation:javax.annotation-api")
runtimeOnly("ch.qos.logback:logback-classic")
implementation("io.micronaut:micronaut-validation")
implementation("com.yugabyte:jdbc-yugabytedb:42.3.3")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update the driver dependency library &lt;em&gt;“com.yugabyte:jdbc-yugabytedb:42.3.3”&lt;/em&gt; to the latest version. Grab the latest version from &lt;a href="https://docs.yugabyte.com/latest/integrations/jdbc-driver/" rel="noopener noreferrer"&gt;YugabyteDB JDBC driver&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Driver Configuration
&lt;/h3&gt;

&lt;p&gt;Refer to the file &lt;strong&gt;&lt;code&gt;yb-ms-data/micronaut/src/main/resources/application.yaml&lt;/code&gt;&lt;/strong&gt; in the project directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datasources:
  default:
    url: jdbc:yugabytedb://[hostname:port]/yugabyte
    driverClassName: com.yugabyte.Driver
    data-source-properties:
      load-balance: true
      currentSchema: todo
    username: yugabyte
    password: yugabyte
    minimum-idle: 5
    maximum-pool-size: 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;url&lt;/code&gt;&lt;/strong&gt; is the JDBC connection string.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;driverClassName&lt;/code&gt;&lt;/strong&gt; is the JDBC driver class name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;data-source-properties&lt;/code&gt;&lt;/strong&gt; is where YugabyteDB driver-specific properties such as “load-balance” and “topology-keys” can be set.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Update the JDBC URL &lt;strong&gt;&lt;code&gt;“jdbc:yugabytedb://[hostname:port]/yugabyte”&lt;/code&gt;&lt;/strong&gt; in the application.yaml file with the appropriate “hostname” and “port” details. Remember to remove the square brackets; they are placeholders indicating the fields that need user input.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build and run the application
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Navigate to the micronaut folder:
&amp;gt; cd yb-ms-data/micronaut
To build the application:
&amp;gt; gradle build
To run and test the application:
&amp;gt; gradle run
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This post looked at how to quickly get started with popular cloud native Java frameworks using the YugabyteDB JDBC driver, without any application-level modifications.&lt;/p&gt;

&lt;p&gt;YSQL’s API compatibility with PostgreSQL accelerates developer productivity and onboarding. By integrating with the existing ecosystem, YugabyteDB ensures developers can quickly start using a language they already know and love. &lt;/p&gt;

&lt;p&gt;The YugabyteDB JDBC driver is a distributed driver built on the PostgreSQL driver. Although the upstream PostgreSQL JDBC driver works with YugabyteDB, the YugabyteDB driver enhances it by providing additional features such as cluster and topology awareness. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you haven’t already, check out our&lt;/em&gt; &lt;a href="https://docs.yugabyte.com/latest/" rel="noopener noreferrer"&gt;&lt;em&gt;Docs&lt;/em&gt;&lt;/a&gt; &lt;em&gt;site to learn more about YugabyteDB. Any questions? Ask them in the&lt;/em&gt; &lt;a href="https://communityinviter.com/apps/yugabyte-db/register" rel="noopener noreferrer"&gt;&lt;em&gt;YugabyteDB community Slack channel&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.yugabyte.com/integrating-yugabytedb-with-spring-boot-quarkus-micronaut/" rel="noopener noreferrer"&gt;Cloud Native Java: Integrating YugabyteDB with Spring Boot, Quarkus, and Micronaut&lt;/a&gt; appeared first on &lt;a href="https://blog.yugabyte.com" rel="noopener noreferrer"&gt;The Distributed SQL Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>springboot</category>
      <category>quarkus</category>
      <category>micronaut</category>
    </item>
    <item>
      <title>Tutorial: How to Deploy Multi-Region YugabyteDB on GKE Using Multi-Cluster Services</title>
      <dc:creator>Srinivasa Vasu</dc:creator>
      <pubDate>Tue, 18 Jan 2022 15:32:16 +0000</pubDate>
      <link>https://dev.to/yugabyte/tutorial-how-to-deploy-multi-region-yugabytedb-on-gke-using-multi-cluster-services-1o8e</link>
      <guid>https://dev.to/yugabyte/tutorial-how-to-deploy-multi-region-yugabytedb-on-gke-using-multi-cluster-services-1o8e</guid>
      <description>&lt;p&gt;The evolution of “build once, run anywhere” &lt;a href="https://www.docker.com/resources/what-container" rel="noopener noreferrer"&gt;containers&lt;/a&gt; and &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;—a cloud-agnostic, declarative-driven orchestration API—have made a scalable, self-service platform layer a reality. Even though it is not a one-size-fits-all solution, it addresses a majority of business and technical challenges. As the common denominator, Kubernetes gives internet-scale applications scalability, resiliency, and agility across clouds in a predictable, consistent manner. But what good is application-layer scalability if the data is still confined to a single, vertically scalable server that can’t exceed a predefined limit?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.yugabyte.com/yugabytedb/" rel="noopener noreferrer"&gt;YugabyteDB&lt;/a&gt; addresses these challenges. It is an open source,  distributed SQL database built for cloud native architecture. YugabyteDB can handle global, internet-scale applications with &lt;a href="https://blog.yugabyte.com/how-to-achieve-high-availability-low-latency-gdpr-compliance-in-a-distributed-sql-database/" rel="noopener noreferrer"&gt;low query latency&lt;/a&gt; and &lt;a href="https://docs.yugabyte.com/latest/architecture/core-functions/high-availability/" rel="noopener noreferrer"&gt;extreme resilience against failures&lt;/a&gt;. It also offers the same level of internet-scale similar to Kubernetes but for data on bare metal, virtual machines, and containers deployed on various clouds.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll explore a multi-region deployment of YugabyteDB on &lt;a href="https://cloud.google.com/kubernetes-engine" rel="noopener noreferrer"&gt;Google Kubernetes Engine (GKE)&lt;/a&gt; using &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud Platform’s&lt;/a&gt; (GCP) native multi-cluster discovery service (MCS). In a Kubernetes cluster, the “Service” object manifest facilitates service discovery and consumption only within the cluster. We need to rely on an off-platform, bespoke solution with Istio-like capabilities to discover services across clusters. But we can build and discover services that span across clusters natively with MCS. Below is an illustration of our multi-region deployment in action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2F0114-Multi-Region-YB-on-GKE-Diagram-01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2F0114-Multi-Region-YB-on-GKE-Diagram-01.png" alt="Single YugabyteDB cluster stretched across 3 GCP regions." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;p&gt;Throughout this blog post, commands appear with placeholders that should be substituted according to the following assignments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.22.34-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.22.34-PM.png" alt="Multi-region deployment assumptions." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Ensure Cloud Readiness
&lt;/h2&gt;

&lt;p&gt;We need a few GCP service APIs for this feature. Let’s enable the following APIs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enable Kubernetes Engine API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.26.44-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.26.44-PM.png" alt="Ensure cloud readiness by enabling the Kubernetes Engine API" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enable GKE hub API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.30.08-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.30.08-PM.png" alt="Enable cloud readiness by enabling the GKE hub API" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enable Cloud DNS API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.43.33-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.43.33-PM.png" alt="Ensure cloud readiness by enabling the Cloud DNS API" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enable Traffic Director API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.46.57-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.46.57-PM.png" alt="Ensure cloud readiness by enabling the Traffic Director API." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enable Cloud Resource Manager API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.51.50-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.51.50-PM.png" alt="Ensure cloud readiness by enabling the Cloud Resource Manager API." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition to these APIs, enable standard services such as Compute Engine and IAM.&lt;/p&gt;
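&lt;p&gt;For reference, the API enablement above can also be scripted with the gcloud CLI. A minimal sketch, where &lt;code&gt;PROJECT_ID&lt;/code&gt; is a placeholder for the GCP project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;gcloud services enable \
  container.googleapis.com \
  gkehub.googleapis.com \
  dns.googleapis.com \
  trafficdirector.googleapis.com \
  cloudresourcemanager.googleapis.com \
  --project PROJECT_ID
&lt;/code&gt;&lt;/pre&gt;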

&lt;h2&gt;
  
  
  Step 2: Create three GKE Clusters (VPC native)
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Create the first cluster in the &lt;strong&gt;US region&lt;/strong&gt; with workload identity enabled.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.55.18-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.55.18-PM.png" alt="Create the first GKE cluster." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create the second cluster in the &lt;strong&gt;Europe region&lt;/strong&gt; with workload identity enabled.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.57.53-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-4.57.53-PM.png" alt="Create the second GKE cluster in the Europe region." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create the third cluster in the &lt;strong&gt;Asia region&lt;/strong&gt; with workload identity enabled.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-5.00.40-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-5.00.40-PM.png" alt="Create the third GKE cluster in the Asia region." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Validate the output of the multi-region cluster creation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-5.02.37-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-05-at-5.02.37-PM.png" alt="Validate the output of the multi-region GKE cluster creation." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
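&lt;p&gt;The three cluster-creation steps and the validation above follow this shape. Cluster names and node counts here are illustrative placeholders; &lt;code&gt;--enable-ip-alias&lt;/code&gt; makes the clusters VPC native, and &lt;code&gt;--workload-pool&lt;/code&gt; turns on workload identity:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# One regional, VPC-native cluster per region, with workload identity enabled
for region in us-central1 europe-west1 asia-south1; do
  gcloud container clusters create "yb-${region}" \
    --region "${region}" \
    --enable-ip-alias \
    --workload-pool=PROJECT_ID.svc.id.goog \
    --num-nodes 1
done

# Validate the multi-region cluster creation
gcloud container clusters list
&lt;/code&gt;&lt;/pre&gt;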

&lt;h2&gt;
  
  
  Step 3: Enable MCS Discovery
&lt;/h2&gt;

&lt;p&gt;Enable the MCS discovery API and Services:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.10.35-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.10.35-PM.png" alt="Enable the MCS discovery API and Services" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As workload identity has already been enabled for the clusters, we need to map the Kubernetes service account to impersonate GCP’s service account. This will allow applications running in the cluster to consume GCP services. IAM binding requires the following mapping.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.12.00-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.12.00-PM.png" alt="Mapping the Kubernetes service account to impersonate GCP’s service account." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
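&lt;p&gt;A sketch of the MCS enablement and IAM binding described above, following GCP’s documented MCS setup (the hub commands were under &lt;code&gt;gcloud alpha&lt;/code&gt; at the time of writing; &lt;code&gt;PROJECT_ID&lt;/code&gt; is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Enable the multi-cluster services API and the MCS feature on the hub
gcloud services enable multiclusterservicediscovery.googleapis.com --project PROJECT_ID
gcloud alpha container hub multi-cluster-services enable --project PROJECT_ID

# Allow the gke-mcs-importer workload identity to read network endpoints
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \
  --role "roles/compute.networkViewer"
&lt;/code&gt;&lt;/pre&gt;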

&lt;h2&gt;
  
  
  Step 4: Establish Hub Membership
&lt;/h2&gt;

&lt;p&gt;Upon successful registration, the Hub membership service provisions the &lt;code&gt;gke-connect&lt;/code&gt; and &lt;code&gt;gke-mcs&lt;/code&gt; services in the cluster. These are CRDs and controllers that talk to GCP APIs to provision the appropriate cloud resources, such as network endpoints, mapping rules, and others as necessary. On successful membership enrollment, this service creates a private managed hosted zone and a traffic director mapping rule. The managed zone “clusterset.local” is similar to “cluster.local”, but it advertises services across clusters so they can be auto-discovered and consumed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To get started, register all three clusters:&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.15.16-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.15.16-PM.png" alt="Establish Hub membership by registering all three clusters." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
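&lt;p&gt;The registration step above can be sketched as follows; the membership names are illustrative and mirror the cluster names:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;for region in us-central1 europe-west1 asia-south1; do
  gcloud container hub memberships register "yb-${region}" \
    --gke-cluster "${region}/yb-${region}" \
    --enable-workload-identity
done

# Verify the membership status
gcloud container hub memberships list
&lt;/code&gt;&lt;/pre&gt;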

&lt;blockquote&gt;
&lt;p&gt;After successful membership enrollment, the following objects will be created.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Managed private zone =&amp;gt; “clusterset.local”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.16.48-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.16.48-PM.png" alt="Enable Hub membership by creating the following objects." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;From there, the traffic director initializes with the following mapping rule.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.18.04-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.18.04-PM.png" alt="Enable Hub membership by initializing the traffic director." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once the traffic director initializes, verify the membership status.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.20.47-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.20.47-PM.png" alt="Verify the member status of the traffic director." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will also set up the appropriate firewall rules. When a service gets provisioned, the right network endpoints and mapping rules are created automatically, as illustrated below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2F0114-Multi-Region-YB-on-GKE-Diagram-02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2F0114-Multi-Region-YB-on-GKE-Diagram-02.png" alt="Three GKE clusters connect by MCS." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Initialize YugabyteDB
&lt;/h2&gt;

&lt;p&gt;As we have been exploring MCS, it’s clear our upstream &lt;a href="https://github.com/yugabyte/charts" rel="noopener noreferrer"&gt;Helm package&lt;/a&gt; won’t work out of the box. Let’s use the upstream chart to generate the template files locally and then make the relevant changes in the local copy. The template variable file for all three regions is available in the &lt;a href="https://gist.github.com/srinivasa-vasu/407019af2090c8b1bd60bd3ed93426d1" rel="noopener noreferrer"&gt;gist&lt;/a&gt; repo.&lt;/p&gt;

&lt;p&gt;Download all three region-specific variable files and an additional service-export.yaml from the remote repo to the local machine, naming them “ap-south1.yaml”, “eu-west1.yaml”, “us-central1.yaml”, and “service-export.yaml”.&lt;/p&gt;

&lt;p&gt;As the upstream chart is not updated for cross-cluster service discovery, the broadcast address of the master and tserver service instances refers to the cluster-local “svc.cluster.local” DNS entry. To let the instances communicate across clusters, this must be updated explicitly to the managed zone’s private domain that the hub created during cluster membership enrollment.&lt;/p&gt;

&lt;p&gt;To get started, generate Helm templates for all three regions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.23.15-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.23.15-PM.png" alt="Generate Helm templates for all three regions." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
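&lt;p&gt;The template generation above can be sketched with Helm. The chart repo URL is the upstream one; the release and namespace names are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;helm repo add yugabytedb https://charts.yugabyte.com
helm repo update

# Render the chart locally per region so the manifests can be edited
for region in us-central1 eu-west1 ap-south1; do
  helm template yb-demo yugabytedb/yugabyte \
    --namespace yb-demo \
    -f "${region}.yaml" \
    &amp;gt; "${region}-manifest.yaml"
done
&lt;/code&gt;&lt;/pre&gt;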

&lt;p&gt;Once we generate the template files, we have to update the broadcast address. More specifically, search for the text &lt;strong&gt;“--server_broadcast_addresses”&lt;/strong&gt; in both the master and tserver StatefulSet manifest definitions and update both entries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.24.33-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.24.33-PM.png" alt="Update both entries in the master and tserver StatefulSet manifest definition." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pattern is &lt;strong&gt;[INSTANCE_NAME].[MEMBERSHIP].[SERVICE_NAME].[NAMESPACE].svc.clusterset.local.&lt;/strong&gt; This change is explained in the next section.&lt;/p&gt;
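&lt;p&gt;For example, for the first master instance in the us-central1 cluster (the membership, service, and namespace names here are illustrative), the flag changes along these lines:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Before (cluster local):
--server_broadcast_addresses=yb-master-0.yb-masters.yb-demo.svc.cluster.local:7100

# After (cross cluster, via the managed zone):
--server_broadcast_addresses=yb-master-0.yb-us-central1.yb-masters.yb-demo.svc.clusterset.local:7100
&lt;/code&gt;&lt;/pre&gt;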

&lt;p&gt;Finally, get the container credentials of all three clusters. Once we get the kubeconfig, the local context would be similar to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.28.37-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.28.37-PM.png" alt="Obtain the container credentials of all three clusters." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
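&lt;p&gt;Fetching the credentials for the three clusters can be sketched as (cluster names are the illustrative placeholders used earlier):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;for region in us-central1 europe-west1 asia-south1; do
  gcloud container clusters get-credentials "yb-${region}" --region "${region}"
done
&lt;/code&gt;&lt;/pre&gt;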

&lt;h2&gt;
  
  
  Step 6: Deploy YugabyteDB
&lt;/h2&gt;

&lt;p&gt;Connect to all three cluster contexts one by one and apply the generated template files using the kubectl CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.29.46-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.29.46-PM.png" alt="Connect to all three cluster contexts one by one and execute the generated template file using kubectl CLI." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
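&lt;p&gt;Applying the rendered manifests per cluster context can be sketched as follows; the context names follow GKE’s &lt;code&gt;gke_PROJECT_REGION_CLUSTER&lt;/code&gt; convention and are illustrative here:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl --context gke_PROJECT_ID_us-central1_yb-us-central1 apply -f us-central1-manifest.yaml
kubectl --context gke_PROJECT_ID_europe-west1_yb-europe-west1 apply -f eu-west1-manifest.yaml
kubectl --context gke_PROJECT_ID_asia-south1_yb-asia-south1 apply -f ap-south1-manifest.yaml
&lt;/code&gt;&lt;/pre&gt;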

&lt;p&gt;Now, let’s explore the &lt;strong&gt;service-export.yaml&lt;/strong&gt; file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.31.36-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.31.36-PM.png" alt="Explore the service-export.yaml file." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
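&lt;p&gt;In text form, a ServiceExport manifest is minimal; a sketch for the two headless services, assuming an illustrative &lt;code&gt;yb-demo&lt;/code&gt; namespace:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: net.gke.io/v1
kind: ServiceExport
metadata:
  namespace: yb-demo
  name: yb-masters
---
apiVersion: net.gke.io/v1
kind: ServiceExport
metadata:
  namespace: yb-demo
  name: yb-tservers
&lt;/code&gt;&lt;/pre&gt;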

&lt;p&gt;The &lt;strong&gt;“ServiceExport”&lt;/strong&gt; CRD is added by the MCS service. It exports services outside of the cluster so they can be discovered and consumed by other clusters. The controller that reacts to events from this resource interacts with GCP services to create “A” records in the internal private managed zone for both the yb-tservers and yb-masters headless services. If we verify the “clusterset.local” domain in the console, we will see the following records:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.32.35-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.32.35-PM.png" alt="Verify the “clusterset.local” domain in the console." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the exported headless services, there will be two “A” records:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.33.44-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.33.44-PM.png" alt="There will be two “A” name records for the exported headless services." width="800" height="400"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
This is similar to how the in-cluster “svc.cluster.local” DNS works. DNS record creation and propagation take some time (around 4 to 5 minutes) the first time. YugabyteDB can’t establish quorum until the DNS records become available. Once those entries propagate, the cluster comes up and runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.35.20-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.35.20-PM.png" alt="An up-and-running YugabyteDB cluster." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we verify the Kubernetes cluster state for the master and tserver objects, we will find all of them in a healthy state. This is represented below using the &lt;a href="https://github.com/vmware-tanzu/octant" rel="noopener noreferrer"&gt;Octant&lt;/a&gt; dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.36.48-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.36.48-PM.png" alt="Verify the Kubernetes cluster state for the master and tserver objects." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.39.07-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.39.07-PM.png" alt="Verify the Kubernetes cluster state for the master and tserver objects." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can simulate a region failure by scaling the StatefulSet replicas down to zero. Upon failure, the RAFT consensus group reacts and rebalances to restore normal operation, as two instances still survive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.39.46-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.39.46-PM.png" alt="Simulate the region failure by bringing down the StatefulSet replica to zero." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we scale the replica count back up, the node rejoins the cluster, and all three regions return to a healthy state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.40.57-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.40.57-PM.png" alt="The node joins back with the cluster group, and all three regions are again back to a healthy state." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
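&lt;p&gt;The failure simulation and recovery above can be sketched with kubectl; the context placeholder and namespace are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Simulate a region failure by scaling one region's StatefulSets to zero
kubectl --context EU_CONTEXT -n yb-demo scale statefulset yb-master --replicas=0
kubectl --context EU_CONTEXT -n yb-demo scale statefulset yb-tserver --replicas=0

# Recover: scale back up and let the instances rejoin the RAFT group
kubectl --context EU_CONTEXT -n yb-demo scale statefulset yb-master --replicas=1
kubectl --context EU_CONTEXT -n yb-demo scale statefulset yb-tserver --replicas=1
&lt;/code&gt;&lt;/pre&gt;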

&lt;h3&gt;
  
  
  Good usage pattern
&lt;/h3&gt;

&lt;p&gt;Because data and the associated secondary indexes are distributed across regions in a multi-region deployment, it is beneficial to pin one region as the preferred region to host the tablet leaders. This keeps network latencies for cross-node RPC calls, such as multi-row transactions and secondary index lookups, to a minimum and confined to a single region, making it one of the most effective patterns for improving latencies in a multi-region deployment.&lt;/p&gt;
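&lt;p&gt;One way to pin the preferred region is the &lt;code&gt;yb-admin&lt;/code&gt; CLI; the master addresses and the placement value below are illustrative placeholders (check the docs for your version):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;yb-admin \
  --master_addresses MASTER_ADDRESSES \
  set_preferred_zones gcp.us-central1.us-central1-b
&lt;/code&gt;&lt;/pre&gt;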

&lt;h2&gt;
  
  
  Step 7: Cleanup
&lt;/h2&gt;

&lt;p&gt;Delete YugabyteDB:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.42.41-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.42.41-PM.png" alt="Delete YugabyteDB." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unregister the Hub Membership:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.43.33-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.43.33-PM.png" alt="Unregister the Hub Membership." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Disable the MCS APIs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.44.34-PM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-10-at-3.44.34-PM.png" alt="Disable the MCS APIs." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, delete the GKE clusters.&lt;/p&gt;
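&lt;p&gt;The cleanup steps above can be sketched as follows; the names mirror the illustrative placeholders used earlier:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Delete the YugabyteDB objects (run against each cluster's context)
for region in us-central1 eu-west1 ap-south1; do
  kubectl delete -f "${region}-manifest.yaml"
done

# Unregister the hub memberships
for region in us-central1 europe-west1 asia-south1; do
  gcloud container hub memberships unregister "yb-${region}" \
    --gke-cluster "${region}/yb-${region}"
done

# Disable the MCS feature and API
gcloud alpha container hub multi-cluster-services disable
gcloud services disable multiclusterservicediscovery.googleapis.com

# Finally, delete the GKE clusters
for region in us-central1 europe-west1 asia-south1; do
  gcloud container clusters delete "yb-${region}" --region "${region}"
done
&lt;/code&gt;&lt;/pre&gt;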

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This blog post used GCP’s multi-cluster service discovery to deploy a highly available, fault-tolerant, and geo-distributed YugabyteDB cluster. As shown in the illustration below, a single YugabyteDB cluster distributed across three different regions addresses many use cases, such as region-local delivery, geo-partitioning, higher availability, and resiliency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-18-at-10.06.21-AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.yugabyte.com%2Fwp-content%2Fuploads%2F2022%2F01%2FScreen-Shot-2022-01-18-at-10.06.21-AM.png" alt="A single YugabyteDB cluster distributed across three different regions addresses many use cases." width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Give this tutorial a try—and don’t hesitate to let us know what you think in the &lt;a href="https://www.yugabyte.com/community/" rel="noopener noreferrer"&gt;YugabyteDB Community Slack channel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://blog.yugabyte.com/multi-region-yugabytedb-on-gke/" rel="noopener noreferrer"&gt;Tutorial: How to Deploy Multi-Region YugabyteDB on GKE Using Multi-Cluster Services&lt;/a&gt; appeared first on &lt;a href="https://blog.yugabyte.com" rel="noopener noreferrer"&gt;The Distributed SQL Blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>distributedsql</category>
      <category>kubernetes</category>
      <category>googlecloudplatform</category>
    </item>
  </channel>
</rss>
