<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohamed Ammar</title>
    <description>The latest articles on DEV Community by Mohamed Ammar (@mohamed_ammar).</description>
    <link>https://dev.to/mohamed_ammar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2139018%2F1b2cb1d4-74d7-4cb6-b9fb-bb173be62435.jpg</url>
      <title>DEV Community: Mohamed Ammar</title>
      <link>https://dev.to/mohamed_ammar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mohamed_ammar"/>
    <language>en</language>
    <item>
      <title>Database Migration Guide: MySQL to PostgreSQL using pgloader and Prisma</title>
      <dc:creator>Mohamed Ammar</dc:creator>
      <pubDate>Sun, 07 Dec 2025 11:06:13 +0000</pubDate>
      <link>https://dev.to/mohamed_ammar/database-migration-guide-mysql-to-postgresql-using-pgloader-and-prisma-4083</link>
      <guid>https://dev.to/mohamed_ammar/database-migration-guide-mysql-to-postgresql-using-pgloader-and-prisma-4083</guid>
      <description>&lt;p&gt;&lt;em&gt;Published on: 07/12/2025 | Tags: #database #migration #mysql #postgresql #prisma #devops&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g2oriy0wc9i8er50u1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g2oriy0wc9i8er50u1c.png" alt="Pgloader to migrate from Mysql to postgres" width="800" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Migrating databases can be daunting, but with the right tools and approach, it becomes manageable. In this guide, I'll walk you through migrating from MySQL to PostgreSQL using &lt;strong&gt;pgloader&lt;/strong&gt; for data transfer and &lt;strong&gt;Prisma ORM&lt;/strong&gt; for schema management and evolution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Combination?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pgloader&lt;/strong&gt;: Excellent for bulk data migration with type casting and data transformation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prisma ORM&lt;/strong&gt;: Perfect for schema management, evolution, and providing type-safe database access&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Environment Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Linux environment (WSL on Windows, Ubuntu, or similar)&lt;/li&gt;
&lt;li&gt;MySQL database with existing schema and data&lt;/li&gt;
&lt;li&gt;PostgreSQL instance ready for migration&lt;/li&gt;
&lt;li&gt;Both instances can be local or remote (just update the connection strings accordingly)&lt;/li&gt;
&lt;/ul&gt;
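&lt;p&gt;Both tools address the databases by connection URL. As a quick sketch (every credential, host, and database name below is a placeholder for your own values):&lt;/p&gt;

```shell
# Connection-URL shapes used by both pgloader and Prisma.
# All values are placeholders; substitute your own user, password, host, and database.
MYSQL_URL="mysql://root:password@localhost:3306/railway"
PG_URL="postgresql://postgres:postgres@localhost:5432/railway"

# Only the scheme and default port differ between the two engines.
echo "$MYSQL_URL"
echo "$PG_URL"
```

&lt;p&gt;If either database is remote, only these URLs change; the rest of the workflow stays the same.&lt;/p&gt;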

&lt;h3&gt;
  
  
  Installation Requirements
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install pgloader&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;pgloader

&lt;span class="c"&gt;# Install Node.js and npm (if not already installed)&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;nodejs npm

&lt;span class="c"&gt;# Install Prisma CLI version 6.x (not version 7)&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; prisma@6.x

&lt;span class="c"&gt;# Install necessary database connectors&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt; @prisma/client@6.x prisma@6.x
npm &lt;span class="nb"&gt;install&lt;/span&gt; @prisma/client-mysql@6.x @prisma/client-pg@6.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Initialize Prisma and Extract MySQL Schema
&lt;/h2&gt;

&lt;p&gt;First, let's set up Prisma and extract our existing MySQL schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Initialize a new Prisma project&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;db-migration-project
&lt;span class="nb"&gt;cd &lt;/span&gt;db-migration-project
npx prisma init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Configure your Prisma schema to connect to MySQL. Your prisma/schema.prisma should look like this initially:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "mysql"
  url      = "mysql://root:password@localhost:3306/railway"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, pull the schema from your MySQL database:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Extract schema from MySQL&lt;/span&gt;
npx prisma db pull

&lt;span class="c"&gt;# This creates a Prisma schema file representing your MySQL structure&lt;/span&gt;
&lt;span class="c"&gt;# Review the generated schema in prisma/schema.prisma&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Create Migration File
&lt;/h2&gt;

&lt;p&gt;Next, create a configuration file for pgloader called migration.load:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create migration configuration&lt;/span&gt;
nano migration.load
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following configuration to migration.load:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;LOAD DATABASE
    FROM mysql://root:password@localhost:3306/railway
    INTO postgresql://postgres:postgres@localhost:5432/railway

    WITH include drop, 
         quote identifiers, 
         create tables,
         foreign keys, 
         create indexes,
         reset sequences,
         workers &lt;span class="o"&gt;=&lt;/span&gt; 8,
         concurrency &lt;span class="o"&gt;=&lt;/span&gt; 1

    CAST
        &lt;span class="nt"&gt;--&lt;/span&gt; Map MySQL datetime to PostgreSQL timestamp without &lt;span class="nb"&gt;time &lt;/span&gt;zone
        &lt;span class="nb"&gt;type &lt;/span&gt;datetime to timestamp,

        &lt;span class="nt"&gt;--&lt;/span&gt; Map large text types to PostgreSQL text
        &lt;span class="nb"&gt;type &lt;/span&gt;longtext to text,

        &lt;span class="nt"&gt;--&lt;/span&gt; Map MySQL integer types appropriately
        &lt;span class="nb"&gt;type &lt;/span&gt;int to integer,
        &lt;span class="nb"&gt;type &lt;/span&gt;tinyint to boolean using tinyint-to-boolean,

        &lt;span class="nt"&gt;--&lt;/span&gt; Handle MySQL specific types
        &lt;span class="nb"&gt;type &lt;/span&gt;year to integer

    BEFORE LOAD DO
        &lt;span class="nv"&gt;$$&lt;/span&gt; CREATE SCHEMA IF NOT EXISTS railway&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;$$&lt;/span&gt;,
        &lt;span class="nv"&gt;$$&lt;/span&gt; SET search_path TO railway&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;$$&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
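&lt;p&gt;To make one of the CAST rules above concrete: as I understand the tinyint-to-boolean cast, pgloader maps MySQL tinyint(1) flag values onto PostgreSQL booleans, 0 becoming false and any non-zero value true. A pure-shell sketch of that per-value mapping:&lt;/p&gt;

```shell
# Sketch of the tinyint-to-boolean mapping: 0 -> false, non-zero -> true.
# Illustration only; pgloader applies this per row during the transfer.
mysql_value=1
case "$mysql_value" in
  0) pg_value=false ;;
  *) pg_value=true ;;
esac
echo "$pg_value"
```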



&lt;p&gt;Key Configuration Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;include drop&lt;/strong&gt;: Drops tables in PostgreSQL if they already exist&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;quote identifiers&lt;/strong&gt;: Ensures special characters in table/column names are handled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;create tables&lt;/strong&gt;: Creates the tables in PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;reset sequences&lt;/strong&gt;: Resets PostgreSQL sequences to match MySQL auto-increment values&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;workers&lt;/strong&gt;: Number of parallel workers for faster migration (adjust based on your system)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 3: Execute Data Migration
&lt;/h2&gt;

&lt;p&gt;Now, run the migration using pgloader:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Execute the migration&lt;/span&gt;
pgloader migration.load
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Monitor the progress: pgloader provides real-time statistics.&lt;/p&gt;

&lt;p&gt;What Happens During Migration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pgloader connects to both databases&lt;/li&gt;
&lt;li&gt;It reads the MySQL schema and creates equivalent PostgreSQL tables&lt;/li&gt;
&lt;li&gt;Data is transferred with appropriate type casting&lt;/li&gt;
&lt;li&gt;Indexes and foreign keys are recreated&lt;/li&gt;
&lt;li&gt;Sequences are reset to maintain auto-increment values&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 4: Handle Migration Report
&lt;/h2&gt;

&lt;p&gt;After migration completes, pgloader provides a detailed report:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Summary report:
  Total transfer time     : 1m 30s
  Total bytes transferred : 2.5 GB
  Average transfer rate   : 28.3 MB/s
  Errors                  : 0
  Warnings                : 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Review any warnings or errors and address them accordingly.&lt;/p&gt;
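&lt;p&gt;Beyond eyeballing the summary, a small guard in a wrapper script can fail fast when the report counts any errors. A sketch, using the sample report text from above as input:&lt;/p&gt;

```shell
# Parse a pgloader-style summary and flag the run if any errors were reported.
# The sample text mirrors the summary report shown above.
report="Errors                 : 0
Warnings               : 2"

errors=$(printf '%s\n' "$report" | awk -F: '/Errors/ {gsub(/ /, "", $2); print $2}')
if [ "$errors" = "0" ]; then
  echo "migration clean"
else
  echo "migration reported $errors errors" >&2
fi
```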

&lt;h2&gt;
  
  
  Step 5: Update Prisma to PostgreSQL
&lt;/h2&gt;

&lt;p&gt;Now that your data is in PostgreSQL, update your Prisma configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Update Prisma schema to use PostgreSQL&lt;/span&gt;
nano prisma/schema.prisma
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the datasource provider to PostgreSQL:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datasource db {
  provider = "postgresql"
  url      = "postgresql://postgres:postgres@localhost:5432/railway"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pull the schema from PostgreSQL to ensure Prisma understands the new database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Extract schema from PostgreSQL&lt;/span&gt;
npx prisma db pull

&lt;span class="c"&gt;# Generate Prisma Client for PostgreSQL&lt;/span&gt;
npx prisma generate

&lt;span class="c"&gt;# Optional: Push schema to ensure consistency&lt;/span&gt;
npx prisma db push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Update Your Application
&lt;/h2&gt;

&lt;p&gt;Update your application's database connection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;// In your application code
const &lt;span class="o"&gt;{&lt;/span&gt; PrismaClient &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; require&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'@prisma/client'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
const prisma &lt;span class="o"&gt;=&lt;/span&gt; new PrismaClient&lt;span class="o"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your Prisma client now connects to PostgreSQL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Using Prisma ORM for Schema Evolution
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Type Safety
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;// Full TypeScript/JavaScript &lt;span class="nb"&gt;type &lt;/span&gt;safety
const user &lt;span class="o"&gt;=&lt;/span&gt; await prisma.user.findUnique&lt;span class="o"&gt;({&lt;/span&gt;
  where: &lt;span class="o"&gt;{&lt;/span&gt; email: &lt;span class="s1"&gt;'user@example.com'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;})&lt;/span&gt;
// &lt;span class="sb"&gt;`&lt;/span&gt;user&lt;span class="sb"&gt;`&lt;/span&gt; is fully typed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Easy Schema Migrations&lt;br&gt;
Make schema changes in the Prisma schema file, then create and apply migrations:&lt;br&gt;
npx prisma migrate dev --name add_new_feature&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database Agnostic&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Same Prisma schema can work with different databases&lt;/li&gt;
&lt;li&gt;Easy to switch or support multiple database backends&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;Built-in Migration History&lt;br&gt;
Prisma maintains a migration history, making rollbacks and audits straightforward.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Developer Experience&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Intuitive data modeling&lt;/li&gt;
&lt;li&gt;Auto-completion in IDEs&lt;/li&gt;
&lt;li&gt;Built-in best practices&lt;/li&gt;
&lt;/ul&gt;
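&lt;p&gt;In practice, the schema-evolution benefit boils down to a short command loop once the PostgreSQL datasource is in place. A sketch using the Prisma CLI (the migration name is just an example):&lt;/p&gt;

```shell
# After editing prisma/schema.prisma, create and apply a migration
npx prisma migrate dev --name add_new_feature

# Inspect which migrations have been applied so far
npx prisma migrate status
```

&lt;p&gt;These commands require a live Prisma project and database, so treat the snippet as a workflow outline rather than something to run as-is.&lt;/p&gt;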

&lt;p&gt;&lt;strong&gt;Application Testing&lt;/strong&gt;&lt;br&gt;
Thoroughly test your application with the new PostgreSQL database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Migrating from MySQL to PostgreSQL using pgloader and Prisma provides a robust, reliable approach. pgloader handles the heavy lifting of data transfer with proper type casting, while Prisma offers excellent schema management and evolution capabilities.&lt;/p&gt;

&lt;p&gt;The combination gives you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smooth data migration&lt;/li&gt;
&lt;li&gt;Type-safe database access&lt;/li&gt;
&lt;li&gt;Easy future schema changes&lt;/li&gt;
&lt;li&gt;Database abstraction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember to always backup both databases before migration and test thoroughly in a staging environment before production deployment.&lt;/p&gt;

&lt;p&gt;Happy migrating! &lt;/p&gt;

&lt;p&gt;About the Author: &lt;br&gt;
Mohamed Ammar is a senior data architect with expertise in database systems and data architectures. Follow for more technical guides and tutorials!&lt;/p&gt;

&lt;p&gt;Disclaimer: This guide assumes you have appropriate backups and have tested the migration process in a non-production environment first. Always verify data integrity after migration.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating a Dynamic Café Website on AWS EC2: A DevOps Journey</title>
      <dc:creator>Mohamed Ammar</dc:creator>
      <pubDate>Tue, 28 Oct 2025 16:24:25 +0000</pubDate>
      <link>https://dev.to/mohamed_ammar/creating-a-dynamic-cafe-website-on-aws-ec2-a-devops-journey-14l5</link>
      <guid>https://dev.to/mohamed_ammar/creating-a-dynamic-cafe-website-on-aws-ec2-a-devops-journey-14l5</guid>
      <description>&lt;p&gt;In this technical walkthrough, I'll document how I built and deployed a dynamic café website across multiple AWS regions using EC2 instances, Secrets Manager, and a LAMP stack - all configured through AWS CLI and PowerShell.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Overview&lt;/strong&gt;&lt;br&gt;
The goal was to create a scalable web application for a café that can accept online orders, with separate development and production environments in different AWS regions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhywvgb589nm3qpc91yrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhywvgb589nm3qpc91yrx.png" alt=" " width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture Goals&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Development Environment: US East (N. Virginia) region&lt;/li&gt;
&lt;li&gt;Production Environment: US West (Oregon) region&lt;/li&gt;
&lt;li&gt;Technology Stack: LAMP (Linux, Apache, MySQL, PHP)&lt;/li&gt;
&lt;li&gt;Security: AWS Secrets Manager for sensitive configuration&lt;/li&gt;
&lt;li&gt;Deployment: Fully automated via AWS CLI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites Setup&lt;/strong&gt;&lt;br&gt;
AWS CLI Configuration&lt;br&gt;
Before starting, I configured AWS CLI with the appropriate credentials:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Configure AWS CLI profile
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set default.region us-east-1
aws configure set default.output json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Phase 1: IAM Role Creation for Secrets Manager Access&lt;br&gt;
Creating the IAM Role and Policy&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create trust policy for EC2
$trustPolicy = @{
    "Version" = "2012-10-17"
    "Statement" = @(
        @{
            "Effect" = "Allow"
            "Principal" = @{
                "Service" = "ec2.amazonaws.com"
            }
            "Action" = "sts:AssumeRole"
        }
    )
} | ConvertTo-Json

$trustPolicy | Out-File -FilePath .\trust-policy.json -Encoding utf8

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create IAM role:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-role `
    --role-name CafeRole `
    --assume-role-policy-document file://trust-policy.json `
    --description "Role for Cafe EC2 instances to access Secrets Manager"

# Create Secrets Manager policy
$secretsManagerPolicy = @{
    "Version" = "2012-10-17"
    "Statement" = @(
        @{
            "Effect" = "Allow"
            "Action" = @(
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecrets",
                "secretsmanager:CreateSecret",
                "secretsmanager:UpdateSecret"
            )
            "Resource" = "*"
        }
    )
} | ConvertTo-Json

$secretsManagerPolicy | Out-File -FilePath .\secrets-manager-policy.json -Encoding utf8

# Create and attach policy
aws iam create-policy `
    --policy-name CafeSecretsManagerPolicy `
    --policy-document file://secrets-manager-policy.json

aws iam attach-role-policy `
    --role-name CafeRole `
    --policy-arn "arn:aws:iam::$(aws sts get-caller-identity --query Account --output text):policy/CafeSecretsManagerPolicy"

# Create instance profile
aws iam create-instance-profile `
    --instance-profile-name CafeInstanceProfile

aws iam add-role-to-instance-profile `
    --instance-profile-name CafeInstanceProfile `
    --role-name CafeRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase 2: Development Environment Setup&lt;/strong&gt;&lt;br&gt;
Launching EC2 Instance in US-East-1 with IAM Role&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create EC2 instance in us-east-1 with IAM role
aws ec2 run-instances `
    --image-id ami-0c02fb55956c7d316 `
    --instance-type t2.small `
    --key-name CafeKeyPair `
    --security-group-ids sg-cafe `
    --subnet-id subnet-123 `
    --iam-instance-profile Name=CafeInstanceProfile `
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Lab IDE}]' `
    --region us-east-1

# Wait for instance to be running
$instanceId = aws ec2 describe-instances `
    --filters "Name=tag:Name,Values=Lab IDE" `
    --query 'Reservations[0].Instances[0].InstanceId' `
    --output text `
    --region us-east-1

aws ec2 wait instance-running `
    --instance-ids $instanceId `
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Connecting via SSH (PuTTY)&lt;br&gt;
Since we're using Windows with PuTTY, convert the key:&lt;/p&gt;




&lt;p&gt;Use PuTTYgen to convert .pem to .ppk, then connect via ec2-user@ with CafeKeyPair.ppk.&lt;/p&gt;

&lt;p&gt;Phase 3: LAMP Stack Configuration&lt;br&gt;
Installing and Configuring Apache Web Server&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update system packages
sudo dnf update -y

# Install Apache (httpd)
sudo dnf install -y httpd

# Configure Apache to listen on port 8000
sudo sed -i 's/Listen 80/Listen 8000/g' /etc/httpd/conf/httpd.conf

# Start and enable Apache
sudo systemctl start httpd
sudo systemctl enable httpd

# Verify Apache status
sudo service httpd status

# Install PHP
sudo dnf install -y php php-mysqli php-json
php --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Installing and Configuring MariaDB&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install MariaDB database
sudo dnf install -y mariadb105-server

# Start and enable MariaDB
sudo systemctl start mariadb
sudo systemctl enable mariadb

# Verify installation
sudo mariadb --version
sudo service mariadb status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Setting Up Development Environment&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create symlink for IDE access
ln -s /var/www/ /home/ec2-user/environment

# Change ownership for web directory
sudo chown ec2-user:ec2-user /var/www/html

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create test webpage:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo '&amp;lt;html&amp;gt;Hello from the café web server!&amp;lt;/html&amp;gt;' &amp;gt; /var/www/html/index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase 4: Application Deployment&lt;/strong&gt;&lt;br&gt;
Downloading and Extracting Application Files&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment

# Download application components
wget https://aws-tc-largeobjects.s3.us-west-2.amazonaws.com/CUR-TF-200-ACACAD-3-113230/03-lab-mod5-challenge-EC2/s3/setup.zip
wget https://aws-tc-largeobjects.s3.us-west-2.amazonaws.com/CUR-TF-200-ACACAD-3-113230/03-lab-mod5-challenge-EC2/s3/db.zip
wget https://aws-tc-largeobjects.s3.us-west-2.amazonaws.com/CUR-TF-200-ACACAD-3-113230/03-lab-mod5-challenge-EC2/s3/cafe.zip

# Extract files
unzip setup.zip
unzip db.zip
unzip cafe.zip -d /var/www/html/

# Install AWS SDK for PHP
cd /var/www/html/cafe/
wget https://docs.aws.amazon.com/aws-sdk-php/v3/download/aws.zip
wget https://docs.aws.amazon.com/aws-sdk-php/v3/download/aws.phar
unzip aws -d /var/www/html/cafe/

# Set appropriate permissions
chmod -R +r /var/www/html/cafe/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;&lt;strong&gt;Phase 5: Secrets Manager Configuration&lt;/strong&gt;&lt;/p&gt;



&lt;p&gt;Setting Application Parameters&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/setup/
./set-app-parameters.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Verifying IAM Role Access&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Test IAM role access
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Verify Secrets Manager access
aws secretsmanager list-secrets --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Phase 6: Database Setup&lt;/strong&gt;&lt;br&gt;
Configuring MySQL Database&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get database password from Secrets Manager
DB_PASSWORD=$(aws secretsmanager get-secret-value --secret-id /cafe/dbPassword --region us-east-1 --query SecretString --output text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Set root password and create database&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/environment/db/
./set-root-password.sh
./create-db.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Database Verification&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Connect to MySQL database
mysql -u admin -p$DB_PASSWORD

# Within MySQL prompt:
show databases;
use cafe_db;
show tables;
select * from product;
exit;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;PHP Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Configure PHP timezone
sudo sed -i "2i date.timezone = \"America/New_York\" " /etc/php.ini

# Restart Apache to apply changes
sudo service httpd restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
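&lt;p&gt;The sed command above uses the GNU "2i" address to insert the timezone directive before line 2 of /etc/php.ini. The same edit can be previewed safely on a scratch file first (the file name and contents here are illustrative):&lt;/p&gt;

```shell
# Demonstrate the '2i' insert on a throwaway php.ini-style file.
printf '%s\n' '[PHP]' 'engine = On' > /tmp/php-ini-demo
sed -i '2i date.timezone = "America/New_York"' /tmp/php-ini-demo

# The inserted directive now sits on line 2.
sed -n '2p' /tmp/php-ini-demo
```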



&lt;p&gt;&lt;strong&gt;Phase 7: Testing the Application&lt;/strong&gt;&lt;br&gt;
Accessing the Website&lt;br&gt;
The café website should now be accessible at:&lt;/p&gt;

&lt;p&gt;http://[instance-public-ip]:8000/cafe&lt;/p&gt;

&lt;p&gt;Testing Order Functionality&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the Menu page&lt;/li&gt;
&lt;li&gt;Select items and submit orders&lt;/li&gt;
&lt;li&gt;Check Order History to verify order persistence&lt;/li&gt;
&lt;/ul&gt;
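&lt;p&gt;Once the instance's public IP is known, the URL the browser needs can be assembled from the pieces configured earlier (the IP below is a documentation placeholder, not a real instance):&lt;/p&gt;

```shell
# Assemble the site URL; port 8000 matches the Apache Listen change from Phase 3.
PORT=8000
PUBLIC_IP="203.0.113.10"   # placeholder from the TEST-NET-3 documentation range
URL="http://$PUBLIC_IP:$PORT/cafe"
echo "$URL"

# A basic availability probe could then look like:
# curl -s -o /dev/null -w '%{http_code}\n' "$URL"
```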

&lt;p&gt;&lt;strong&gt;Phase 8: Creating Production Environment&lt;/strong&gt;&lt;br&gt;
Preparing for AMI Creation&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set hostname
sudo hostname cafeserver

# Generate SSH key for future access
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""

# Add public key to authorized keys
cat ~/.ssh/id_rsa.pub &amp;gt;&amp;gt; ~/.ssh/authorized_keys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Creating AMI via AWS CLI&lt;/p&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create AMI from development instance
aws ec2 create-image `
    --instance-id $instanceId `
    --name "CafeServer" `
    --description "Cafe web server AMI" `
    --region us-east-1

# Wait for AMI to become available
aws ec2 wait image-available `
    --filters "Name=name,Values=CafeServer" `
    --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Launching Production Instance in US-West-2&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Switch to Oregon region for EC2 operations
aws configure set region us-west-2

# Create security group in Oregon
aws ec2 create-security-group `
    --group-name cafe-sg-oregon `
    --description "Cafe security group for Oregon region" `
    --vpc-id vpc-0zzzzzzzzzz `
    --region us-west-2

$oregonSGId = aws ec2 describe-security-groups `
    --group-names cafe-sg-oregon `
    --query 'SecurityGroups[0].GroupId' `
    --output text `
    --region us-west-2

# Configure security group rules
aws ec2 authorize-security-group-ingress `
    --group-id $oregonSGId `
    --protocol tcp `
    --port 22 `
    --cidr 0.0.0.0/0 `
    --region us-west-2

aws ec2 authorize-security-group-ingress `
    --group-id $oregonSGId `
    --protocol tcp `
    --port 8000 `
    --cidr 0.0.0.0/0 `
    --region us-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Launch production instance with IAM role&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 run-instances `
    --image-id ami-0yyyyyyyyyyy `
    --instance-type t2.small `
    --security-group-ids $oregonSGId `
    --subnet-id subnet-0yyyyyyyyyyy `
    --iam-instance-profile Name=CafeInstanceProfile `
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=ProdCafeServer}]' `
    --region us-west-2

# Wait for instance to be running
$prodInstanceId = aws ec2 describe-instances `
    --filters "Name=tag:Name,Values=ProdCafeServer" `
    --query 'Reservations[0].Instances[0].InstanceId' `
    --output text `
    --region us-west-2

aws ec2 wait instance-running `
    --instance-ids $prodInstanceId `
    --region us-west-2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase 9: Configuring Production Secrets&lt;/strong&gt;&lt;br&gt;
Creating Secrets in Oregon Region&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On development instance, create Oregon region secrets
cd ~/environment/setup/

# Create modified script for Oregon
cat &amp;gt; set-app-parameters-oregon.sh &amp;lt;&amp;lt; 'EOF'
#!/bin/bash
region="us-west-2"

publicDNS=$(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=ProdCafeServer" \
    --query 'Reservations[0].Instances[0].PublicDnsName' \
    --output text \
    --region $region)

# Create secrets in Oregon region
aws secretsmanager create-secret \
    --name /cafe/dbPassword \
    --description "Database password for cafe application" \
    --secret-string "dbPassword123" \
    --region $region

aws secretsmanager create-secret \
    --name /cafe/dbName \
    --description "Database name for cafe application" \
    --secret-string "cafe_db" \
    --region $region

echo "Secrets created successfully in $region"
EOF

chmod +x set-app-parameters-oregon.sh
./set-app-parameters-oregon.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Phase 10: Production Verification&lt;/strong&gt;&lt;br&gt;
Testing Production Environment&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get production instance public IP
PROD_IP=$(aws ec2 describe-instances \
    --instance-ids $prodInstanceId \
    --query 'Reservations[0].Instances[0].PublicIpAddress' \
    --output text \
    --region us-west-2)

echo "Production site URL: http://$PROD_IP:8000/cafe"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;IAM Role Verification Script&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
# iam-role-test.sh

echo "Testing IAM Role Configuration..."
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
aws sts get-caller-identity
aws secretsmanager list-secrets --max-items 5
echo "IAM Role test completed."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Final Architecture&lt;br&gt;
With the IAM role properly configured, our architecture includes:&lt;/p&gt;

&lt;p&gt;Development Region (us-east-1):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EC2 instance with CafeRole attached&lt;/li&gt;
&lt;li&gt;Access to Secrets Manager in us-east-1&lt;/li&gt;
&lt;li&gt;Full LAMP stack application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Production Region (us-west-2):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;EC2 instance with same CafeRole attached&lt;/li&gt;
&lt;li&gt;Access to Secrets Manager in us-west-2&lt;/li&gt;
&lt;li&gt;Identical application deployment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Global IAM Infrastructure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;CafeRole with Secrets Manager permissions&lt;/li&gt;
&lt;li&gt;CafeInstanceProfile for EC2 attachment&lt;/li&gt;
&lt;li&gt;Cross-region consistency in access control&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;IAM is Critical: Without proper IAM roles, the application cannot access Secrets Manager&lt;/li&gt;
&lt;li&gt;Regional Considerations: AWS resources are region-specific, requiring careful planning&lt;/li&gt;
&lt;li&gt;Security Best Practices: Using Secrets Manager significantly improves security&lt;/li&gt;
&lt;li&gt;Automation Benefits: AWS CLI commands enable reproducible deployments&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This project successfully demonstrated a complete DevOps workflow for deploying a dynamic web application across multiple AWS regions. The café now has a robust, scalable ordering system with proper IAM role configuration for secure Secrets Manager access, ensuring both development and production environments function correctly.&lt;/p&gt;

&lt;p&gt;The entire infrastructure can be recreated programmatically using the AWS CLI commands and scripts documented in this post.&lt;/p&gt;

</description>
      <category>awsacademy</category>
      <category>devops</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Deploying StarRocks in Shared Data Mode on Minikube with S3 Integration</title>
      <dc:creator>Mohamed Ammar</dc:creator>
      <pubDate>Sun, 14 Sep 2025 21:11:04 +0000</pubDate>
      <link>https://dev.to/mohamed_ammar/deploying-starrocks-in-shared-data-mode-on-minikube-with-s3-integration-22fj</link>
      <guid>https://dev.to/mohamed_ammar/deploying-starrocks-in-shared-data-mode-on-minikube-with-s3-integration-22fj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpk6gbmslw58y5xify8tu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpk6gbmslw58y5xify8tu.png" alt=" " width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Modern Data Stack&lt;/strong&gt;&lt;br&gt;
In the world of real-time analytics, the ability to query massive datasets at lightning speed is not just a luxury—it's a necessity. StarRocks has emerged as a powerhouse in this space, renowned for its sub-second query performance on petabyte-scale data. Its native vectorized execution engine and cost-based optimizer make it a top contender for replacing complex, multi-component data architectures.&lt;/p&gt;

&lt;p&gt;But how do you manage and scale such a high-performance database? &lt;/p&gt;

&lt;p&gt;Kubernetes, the de facto standard for container orchestration, provides the elasticity, resilience, and portability that modern applications demand. By running StarRocks on Kubernetes, you can automate deployment, scaling, and management, making your analytics infrastructure as agile as your code.&lt;/p&gt;

&lt;p&gt;In this guide, I'll dive into a particularly powerful feature: StarRocks' Shared Data Mode. This architecture decouples compute from storage. Your compute nodes (CNs) are stateless and can be spun up or down in seconds, while your data remains safely and durably stored in a central repository like Amazon S3. This means you can scale your compute resources elastically based on query load, leading to significant cost savings and performance optimization.&lt;/p&gt;

&lt;p&gt;We'll walk through setting this all up on a local Minikube cluster running on an EC2 machine, providing a perfect sandbox for development, testing, and learning.&lt;/p&gt;

&lt;p&gt;📋 Prerequisites&lt;br&gt;
Before we begin, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 with Linux OS&lt;/li&gt;
&lt;li&gt;AWS Account: With credentials (Access Key and Secret Key) for an S3 bucket.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛠️ &lt;strong&gt;Phase 1: Setting Up Our Kubernetes playground on EC2&lt;/strong&gt;&lt;br&gt;
First, we need a machine to host our cluster. An EC2 instance is perfect for this.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch an EC2 Instance&lt;/li&gt;
&lt;li&gt;Log into your AWS Console and navigate to EC2.&lt;/li&gt;
&lt;li&gt;Launch a new instance. A t2.xlarge (4 vCPUs, 16 GiB RAM) or c5.2xlarge (8 vCPUs, 16 GiB RAM) is recommended to ensure Minikube has enough resources.&lt;/li&gt;
&lt;li&gt;Select an Amazon Linux AMI.&lt;/li&gt;
&lt;li&gt;Configure security groups to allow SSH (port 22) access from your IP.&lt;/li&gt;
&lt;li&gt;Launch the instance, ensuring you have the .pem key pair to connect.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Connect and Prepare the EC2 Instance
Connect via SSH using your terminal or SSH client.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once logged in, update the system and install necessary base packages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum update -y 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1i696p7tas1lgzuq7pr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1i696p7tas1lgzuq7pr.png" alt=" " width="792" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📁 Files Required&lt;br&gt;
We've prepared a set of files to automate and configure our setup. Download them to your EC2 instance into the same directory.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;env_setup.sh&lt;/strong&gt;: Installs Minikube, kubectl, Helm, and other dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;configmap.yaml&lt;/strong&gt;: Contains the StarRocks configuration to enable Shared Data mode.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;starrocks-cluster.yaml&lt;/strong&gt;: The main manifest defining our StarRocks cluster (FE, BE, CN).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;test-s3.py&lt;/strong&gt;: A simple script to validate our S3 credentials before deployment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can create these files directly on the EC2 instance using vim or nano.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdukwedf5dkmp2661shv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdukwedf5dkmp2661shv8.png" alt=" " width="715" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;env_setup.sh:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Install Minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Install Docker (required by Minikube's docker driver)
sudo yum install docker -y # Use apt on Ubuntu
# Note: don't run 'newgrp docker' here; it spawns a new shell and stalls the script.
sudo usermod -aG docker $USER
sudo systemctl start docker
sudo systemctl enable docker

echo "All tools installed! Please log out and back in for group changes to take effect, or run 'newgrp docker'."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
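&lt;p&gt;After the script finishes (and after re-logging in so the docker group change takes effect), it is worth confirming everything landed on your PATH. An optional Python check; the tool list simply mirrors what env_setup.sh installs:&lt;/p&gt;

```python
import shutil

def missing_tools(tools=("kubectl", "minikube", "helm", "docker")):
    """Return the subset of the expected CLI tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

gone = missing_tools()
print("all tools installed" if not gone else "missing: " + ", ".join(gone))
```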



&lt;p&gt;&lt;strong&gt;configmap.yaml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-starrockscluster-fe-cm
  labels:
    cluster: starrockscluster-poc-cp
data:
  fe.conf: |
    LOG_DIR = ${STARROCKS_HOME}/log
    DATE = "$(date +%Y%m%d-%H%M%S)"
    JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:${LOG_DIR}/fe.gc.log.$DATE"
    JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:${LOG_DIR}/fe.gc.log.$DATE:time"
    JAVA_OPTS_FOR_JDK_11="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseG1GC -Xlog:gc*:${LOG_DIR}/fe.gc.log.$DATE:time"
    http_port = 8030
    rpc_port = 9020
    query_port = 9030
    edit_log_port = 9010
    mysql_service_nio_enabled = true
    sys_log_level = INFO
    # config for shared-data mode
    run_mode = shared_data
    cloud_native_meta_port = 6090
    # Whether volume can be created from conf. If it is enabled, a builtin storage volume may be created.
    enable_load_volume_from_conf = true

    # AWS S3 storage (S3 protocol)
    cloud_native_storage_type = S3

    # For example, testbucket/subpath
    aws_s3_path = &amp;lt;YOUR-BUCKET-NAME&amp;gt;


    # For example: us-east-1
    aws_s3_region = &amp;lt;YOUR-AWS-REGION&amp;gt;

    # For example: https://s3.amazonaws.com
    aws_s3_endpoint = https://s3.amazonaws.com
    aws_s3_access_key = "&amp;lt;YOUR-ACCESS-KEY&amp;gt;"
    aws_s3_secret_key = "&amp;lt;YOUR-SECRET-KEY&amp;gt;"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;starrocks-cluster.yaml:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#This manifest deploys a StarRocks cluster running in shared data mode.
# see https://docs.starrocks.io/docs/cover_pages/shared_data_deployment/ for more information about shared-data mode.
#
# You will have to download and edit this YAML file to specify the details for your shared storage. See the
# examples in the docs, and add your customizations to the ConfigMap `starrockscluster-sample-fe-cm` at the
# bottom of this file.
# https://docs.starrocks.io/en-us/latest/deployment/deploy_shared_data#configure-fe-nodes-for-shared-data-starrocks

apiVersion: starrocks.com/v1
kind: StarRocksCluster
metadata:
  name: poc-starrocks-cluster   # change the name if needed.
spec:
  starRocksFeSpec:
    image: starrocks/fe-ubuntu:3.2.7  
    replicas: 3
    limits:
      memory: 3Gi
    requests:
      cpu: '1'
      memory: 1Gi
    configMapInfo:
      configMapName: poc-starrockscluster-fe-cm
      resolveKey: fe.conf
  starRocksCnSpec:
    image: starrocks/cn-ubuntu:3.2.7   #try 3.3-latest
    replicas: 1
    limits:
      memory: 5Gi
    requests:
      cpu: '1'
      memory: 2Gi
    autoScalingPolicy: # Automatic scaling policy of the CN cluster.
      maxReplicas: 10 # The maximum number of CNs is set to 10.
      minReplicas: 1 # The minimum number of CNs is set to 1.
      # operator creates an HPA resource based on the following field.
      # see https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ for more information.
      hpaPolicy:
        metrics: # Resource metrics
          - type: Resource
            resource:
              name: memory  # The average memory usage of CNs is specified as a resource metric.
              target:
                averageUtilization: 15
                type: Utilization
          - type: Resource
            resource:
              name: cpu # The average CPU utilization of CNs is specified as a resource metric.
              target:
                averageUtilization: 15
                type: Utilization
        behavior: #  The scaling behavior is customized according to business scenarios, helping you achieve rapid or slow scaling or disable scaling.
          scaleUp:
            policies:
              - type: Pods
                value: 1
                periodSeconds: 10
          scaleDown:
            policies:
              - type: Pods
                value: 1
                periodSeconds: 60
            stabilizationWindowSeconds: 300
            selectPolicy: Max

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: poc-starrockscluster-fe-cm
  labels:
    cluster: starrockscluster-poc-cp
data:
  fe.conf: |
    LOG_DIR = ${STARROCKS_HOME}/log
    DATE = "$(date +%Y%m%d-%H%M%S)"
    JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:${LOG_DIR}/fe.gc.log.$DATE"
    JAVA_OPTS_FOR_JDK_9="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:${LOG_DIR}/fe.gc.log.$DATE:time"
    JAVA_OPTS_FOR_JDK_11="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseG1GC -Xlog:gc*:${LOG_DIR}/fe.gc.log.$DATE:time"
    http_port = 8030
    rpc_port = 9020
    query_port = 9030
    edit_log_port = 9010
    mysql_service_nio_enabled = true
    sys_log_level = INFO
    # config for shared-data mode
    run_mode = shared_data
    cloud_native_meta_port = 6090
    # Whether volume can be created from conf. If it is enabled, a builtin storage volume may be created.
    enable_load_volume_from_conf = true

    # AWS S3 storage (S3 protocol)
    cloud_native_storage_type = S3

    # For example, testbucket/subpath
    aws_s3_path = "&amp;lt;YOUR-BUCKET-NAME&amp;gt;"

    # For example: us-west-2
    aws_s3_region = us-west-2

    # For example: https://s3.amazonaws.com
    aws_s3_endpoint = https://s3.amazonaws.com

    aws_s3_access_key = "&amp;lt;YOUR-ACCESS-KEY&amp;gt;"
    aws_s3_secret_key = "&amp;lt;YOUR-SECRET-KEY&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
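&lt;p&gt;The autoScalingPolicy above targets 15% average CPU and memory utilization. Under the hood, the HPA sizes the CN group with the standard formula desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization), clamped to minReplicas and maxReplicas. A quick sketch of that arithmetic with the values from this manifest:&lt;/p&gt;

```python
import math

def desired_cn_replicas(current, utilization_pct,
                        target_pct=15, min_replicas=1, max_replicas=10):
    """Standard HPA formula: ceil(current * current/target utilization),
    clamped to the bounds set in autoScalingPolicy above."""
    raw = math.ceil(current * utilization_pct / target_pct)
    return max(min_replicas, min(max_replicas, raw))

# A heavy query pushes the single CN to 45% average utilization:
print(desired_cn_replicas(current=1, utilization_pct=45))  # 3
# Load falls back to 5%: scale back down to the minimum.
print(desired_cn_replicas(current=3, utilization_pct=5))   # 1
```

&lt;p&gt;The scaleUp/scaleDown policies then rate-limit how fast the HPA may move toward that desired count.&lt;/p&gt;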



&lt;p&gt;&lt;strong&gt;test-s3.py:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from botocore.exceptions import ClientError

BUCKET_NAME = "&amp;lt;YOUR-BUCKET-NAME&amp;gt;"
REGION = "&amp;lt;YOUR-AWS-REGION&amp;gt;"
ACCESS_KEY = "&amp;lt;YOUR-ACCESS-KEY&amp;gt;"
SECRET_KEY = "&amp;lt;YOUR-SECRET-KEY&amp;gt;"

s3 = boto3.client(
    's3',
    region_name=REGION,
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY
)

try:
    response = s3.list_buckets()
    print("Connection successful! Available buckets:")
    for bucket in response['Buckets']:
        print(f'  {bucket["Name"]}')

    # Try a head bucket operation for a more specific check
    s3.head_bucket(Bucket=BUCKET_NAME)
    print(f"\nSuccessfully accessed the target bucket: {BUCKET_NAME}")

except ClientError as e:
    print(f"Error: {e}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔐 &lt;strong&gt;Phase 2: Configure S3 Access&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Our entire setup hinges on StarRocks being able to communicate with S3. Let's test this first.&lt;/p&gt;

&lt;p&gt;Edit both test-s3.py and configmap.yaml. Replace all placeholders (&amp;lt;...&amp;gt;, your-...) with your actual S3 Bucket Name, Region, Access Key, and Secret Key.&lt;/p&gt;

&lt;p&gt;Install the Boto3 library and run the test script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install boto3
python3 test-s3.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A successful output confirms your credentials and permissions are correct. Fix any errors here before proceeding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg9khv2z4q7gfb331nsx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg9khv2z4q7gfb331nsx.png" alt=" " width="697" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;Phase 3: Setup Environment &amp;amp; Start Minikube&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now, let's turn our EC2 instance into a single-node Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Make the setup script executable and run it:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x env_setup.sh
./env_setup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start Minikube with adequate resources. The shared data mode is memory and CPU intensive.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube start --driver=docker --cpus=8 --memory=12288
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the cluster is running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🚀 &lt;strong&gt;Phase 4: Install the StarRocks Kubernetes Operator&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Operators are Kubernetes-native applications that manage complex stateful services like databases. We'll use the StarRocks operator to deploy our cluster.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add starrocks https://starrocks.github.io/starrocks-kubernetes-operator
helm repo update
helm install starrocks-operator starrocks/operator \
  --create-namespace --namespace starrocks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check if the operator pod is running:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n starrocks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📦 &lt;strong&gt;Phase 5: Deploy the StarRocks Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the operator running, we can now deploy our custom-configured StarRocks cluster.&lt;/p&gt;

&lt;p&gt;Apply the configuration that points to our S3 bucket:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy the cluster itself:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f starrocks-cluster.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch the pods come up. This may take a few minutes as it pulls large container images.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n starrocks 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until you see an output similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bydadqtqavgr7diz8zn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bydadqtqavgr7diz8zn.png" alt=" " width="682" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice the three FE pods for high availability and the single CN pod. There are no BE pods: in shared-data mode, stateless CN nodes take the place of BEs, because the data itself lives in S3 rather than on local disks.&lt;/p&gt;

&lt;p&gt;🔌 &lt;strong&gt;Phase 6: Connect and Load Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's interact with our cluster and load some sample data.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to the MySQL Client
We'll exec into the FE pod to use the built-in MySQL client.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec --stdin --tty poc-starrocks-cluster-fe-0 --   mysql -P9030 -h127.0.0.1 -u root --prompt="StarRocks &amp;gt; "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a Database and Tables
Run these SQL commands inside the MySQL client:&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE IF NOT EXISTS quickstart;
USE quickstart;

CREATE TABLE IF NOT EXISTS crashdata (
    CRASH_DATE DATETIME,
    BOROUGH STRING,
    ZIP_CODE STRING,
    LATITUDE INT,
    LONGITUDE INT,
    LOCATION STRING,
    ON_STREET_NAME STRING,
    CROSS_STREET_NAME STRING,
    OFF_STREET_NAME STRING,
    CONTRIBUTING_FACTOR_VEHICLE_1 STRING,
    CONTRIBUTING_FACTOR_VEHICLE_2 STRING,
    COLLISION_ID INT,
    VEHICLE_TYPE_CODE_1 STRING,
    VEHICLE_TYPE_CODE_2 STRING
);
CREATE TABLE IF NOT EXISTS weatherdata (
    DATE DATETIME,
    NAME STRING,
    HourlyDewPointTemperature STRING,
    HourlyDryBulbTemperature STRING,
    HourlyPrecipitation STRING,
    HourlyPresentWeatherType STRING,
    HourlyPressureChange STRING,
    HourlyPressureTendency STRING,
    HourlyRelativeHumidity STRING,
    HourlySkyConditions STRING,
    HourlyVisibility STRING,
    HourlyWetBulbTemperature STRING,
    HourlyWindDirection STRING,
    HourlyWindGustSpeed STRING,
    HourlyWindSpeed STRING
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjxtv5a717wo3e8z93am.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjxtv5a717wo3e8z93am.png" alt=" " width="800" height="179"&gt;&lt;/a&gt;&lt;br&gt;
Type exit to leave the MySQL client.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download and Upload Sample Datasets&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -O https://raw.githubusercontent.com/StarRocks/demo/master/documentation-samples/quickstart/datasets/NYPD_Crash_Data.csv
curl -O https://raw.githubusercontent.com/StarRocks/demo/master/documentation-samples/quickstart/datasets/72505394728.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Copy the files into the FE pod:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp ./NYPD_Crash_Data.csv poc-starrocks-cluster-fe-0:/tmp/NYPD_Crash_Data.csv -n default
kubectl cp ./72505394728.csv poc-starrocks-cluster-fe-0:/tmp/72505394728.csv -n default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feybr0sbh94hphnpl9cxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feybr0sbh94hphnpl9cxi.png" alt=" " width="800" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get a shell inside the FE pod to run the curl commands for loading data.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -it poc-starrocks-cluster-fe-0 -n default -- /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inside the container, run the two curl commands below to load data into the weatherdata and crashdata tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember, just press Enter when prompted for a password&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location-trusted -u root             \
    -T /tmp/72505394728.csv                    \
    -H "label:weather-0"                    \
    -H "column_separator:,"                 \
    -H "skip_header:1"                      \
    -H "enclose:\""                         \
    -H "max_filter_ratio:1"                 \
    -H "columns: STATION, DATE, LATITUDE, LONGITUDE, ELEVATION, NAME, REPORT_TYPE, SOURCE, HourlyAltimeterSetting, HourlyDewPointTemperature, HourlyDryBulbTemperature, HourlyPrecipitation, HourlyPresentWeatherType, HourlyPressureChange, HourlyPressureTendency, HourlyRelativeHumidity, HourlySkyConditions, HourlySeaLevelPressure, HourlyStationPressure, HourlyVisibility, HourlyWetBulbTemperature, HourlyWindDirection, HourlyWindGustSpeed, HourlyWindSpeed, Sunrise, Sunset, DailyAverageDewPointTemperature, DailyAverageDryBulbTemperature, DailyAverageRelativeHumidity, DailyAverageSeaLevelPressure, DailyAverageStationPressure, DailyAverageWetBulbTemperature, DailyAverageWindSpeed, DailyCoolingDegreeDays, DailyDepartureFromNormalAverageTemperature, DailyHeatingDegreeDays, DailyMaximumDryBulbTemperature, DailyMinimumDryBulbTemperature, DailyPeakWindDirection, DailyPeakWindSpeed, DailyPrecipitation, DailySnowDepth, DailySnowfall, DailySustainedWindDirection, DailySustainedWindSpeed, DailyWeather, MonthlyAverageRH, MonthlyDaysWithGT001Precip, MonthlyDaysWithGT010Precip, MonthlyDaysWithGT32Temp, MonthlyDaysWithGT90Temp, MonthlyDaysWithLT0Temp, MonthlyDaysWithLT32Temp, MonthlyDepartureFromNormalAverageTemperature, MonthlyDepartureFromNormalCoolingDegreeDays, MonthlyDepartureFromNormalHeatingDegreeDays, MonthlyDepartureFromNormalMaximumTemperature, MonthlyDepartureFromNormalMinimumTemperature, MonthlyDepartureFromNormalPrecipitation, MonthlyDewpointTemperature, MonthlyGreatestPrecip, MonthlyGreatestPrecipDate, MonthlyGreatestSnowDepth, MonthlyGreatestSnowDepthDate, MonthlyGreatestSnowfall, MonthlyGreatestSnowfallDate, MonthlyMaxSeaLevelPressureValue, MonthlyMaxSeaLevelPressureValueDate, MonthlyMaxSeaLevelPressureValueTime, MonthlyMaximumTemperature, MonthlyMeanTemperature, MonthlyMinSeaLevelPressureValue, MonthlyMinSeaLevelPressureValueDate, MonthlyMinSeaLevelPressureValueTime, MonthlyMinimumTemperature, MonthlySeaLevelPressure, MonthlyStationPressure, 
MonthlyTotalLiquidPrecipitation, MonthlyTotalSnowfall, MonthlyWetBulb, AWND, CDSD, CLDD, DSNW, HDSD, HTDD, NormalsCoolingDegreeDay, NormalsHeatingDegreeDay, ShortDurationEndDate005, ShortDurationEndDate010, ShortDurationEndDate015, ShortDurationEndDate020, ShortDurationEndDate030, ShortDurationEndDate045, ShortDurationEndDate060, ShortDurationEndDate080, ShortDurationEndDate100, ShortDurationEndDate120, ShortDurationEndDate150, ShortDurationEndDate180, ShortDurationPrecipitationValue005, ShortDurationPrecipitationValue010, ShortDurationPrecipitationValue015, ShortDurationPrecipitationValue020, ShortDurationPrecipitationValue030, ShortDurationPrecipitationValue045, ShortDurationPrecipitationValue060, ShortDurationPrecipitationValue080, ShortDurationPrecipitationValue100, ShortDurationPrecipitationValue120, ShortDurationPrecipitationValue150, ShortDurationPrecipitationValue180, REM, BackupDirection, BackupDistance, BackupDistanceUnit, BackupElements, BackupElevation, BackupEquipment, BackupLatitude, BackupLongitude, BackupName, WindEquipmentChangeDate" \
    -XPUT http://127.0.0.1:8030/api/quickstart/weatherdata/_stream_load
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlasdc7yt68wb7zconmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlasdc7yt68wb7zconmd.png" alt=" " width="800" height="160"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    curl --location-trusted -u root             \
    -T /tmp/NYPD_Crash_Data.csv                \
    -H "label:crashdata-0"                  \
    -H "column_separator:,"                 \
    -H "skip_header:1"                      \
    -H "enclose:\""                         \
    -H "max_filter_ratio:1"                 \
    -H "columns:tmp_CRASH_DATE, tmp_CRASH_TIME, CRASH_DATE=str_to_date(concat_ws(' ', tmp_CRASH_DATE, tmp_CRASH_TIME), '%m/%d/%Y %H:%i'),BOROUGH,ZIP_CODE,LATITUDE,LONGITUDE,LOCATION,ON_STREET_NAME,CROSS_STREET_NAME,OFF_STREET_NAME,NUMBER_OF_PERSONS_INJURED,NUMBER_OF_PERSONS_KILLED,NUMBER_OF_PEDESTRIANS_INJURED,NUMBER_OF_PEDESTRIANS_KILLED,NUMBER_OF_CYCLIST_INJURED,NUMBER_OF_CYCLIST_KILLED,NUMBER_OF_MOTORIST_INJURED,NUMBER_OF_MOTORIST_KILLED,CONTRIBUTING_FACTOR_VEHICLE_1,CONTRIBUTING_FACTOR_VEHICLE_2,CONTRIBUTING_FACTOR_VEHICLE_3,CONTRIBUTING_FACTOR_VEHICLE_4,CONTRIBUTING_FACTOR_VEHICLE_5,COLLISION_ID,VEHICLE_TYPE_CODE_1,VEHICLE_TYPE_CODE_2,VEHICLE_TYPE_CODE_3,VEHICLE_TYPE_CODE_4,VEHICLE_TYPE_CODE_5" \
    -XPUT http://127.0.0.1:8030/api/quickstart/crashdata/_stream_load
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
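&lt;p&gt;Each curl flag above maps onto a Stream Load HTTP header on a PUT to the FE. As a sketch, here is the same request expressed in Python (host, port, database, and table names mirror this guide; actually sending it would still require the FE to be reachable):&lt;/p&gt;

```python
def stream_load_request(db, table, label, host="127.0.0.1", port=8030):
    """Build the URL and headers for a StarRocks Stream Load, mirroring
    the curl flags used above (label, column_separator, skip_header, ...)."""
    url = "http://{}:{}/api/{}/{}/_stream_load".format(host, port, db, table)
    headers = {
        "label": label,            # unique job label, enables safe retries
        "column_separator": ",",
        "skip_header": "1",        # first CSV row holds column names
        "enclose": '"',
        "max_filter_ratio": "1",   # tolerate rows that fail to parse
    }
    return url, headers

url, headers = stream_load_request("quickstart", "crashdata", "crashdata-0")
print(url)  # http://127.0.0.1:8030/api/quickstart/crashdata/_stream_load
```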



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5dtixzt7rnsvvjg8bfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5dtixzt7rnsvvjg8bfa.png" alt=" " width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📈 &lt;strong&gt;Phase 7: Test Queries and Observe Auto-Scaling&lt;/strong&gt;&lt;br&gt;
The magic of Shared Data Mode is its elasticity. Let's see it in action.&lt;/p&gt;

&lt;p&gt;Enable Metrics Server: Minikube needs this for the Horizontal Pod Autoscaler (HPA) to work.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube addons enable metrics-server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run a Sample Query: Connect again with the MySQL client and run a few analytical queries against the loaded tables.&lt;/p&gt;

&lt;p&gt;Trigger a Scale-Out: Run a heavy, full-table scan. This will spike CPU usage.&lt;/p&gt;

&lt;p&gt;Connect to the MySQL Client&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec --stdin --tty poc-starrocks-cluster-fe-0 --   mysql -P9030 -h127.0.0.1 -u root --prompt="StarRocks &amp;gt; "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM crashdata;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Watch the Autoscaler Work: Open a new terminal window on your EC2 instance and watch the pods.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;watch kubectl get pods -n starrocks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ You should observe:&lt;/p&gt;

&lt;p&gt;After a minute or so of the heavy query, a new CN pod (e.g., poc-starrocks-cluster-cn-1) will appear with status ContainerCreating and then Running.&lt;/p&gt;

&lt;p&gt;The HPA automatically provisioned it to handle the load!&lt;/p&gt;

&lt;p&gt;Wait 3-5 minutes after the query finishes. You will see the extra CN pod automatically terminate. This is the autoscaler saving resources by scaling down.&lt;/p&gt;
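&lt;p&gt;The scaling decisions themselves are visible on the HPA object. A hedged sketch, assuming the operator created the HPA in the starrocks namespace (list first to find its exact name):&lt;/p&gt;

```shell
# List the HPAs the operator created, then follow them to watch
# current vs. target CPU utilization and the replica count change
kubectl get hpa -n starrocks
kubectl get hpa -n starrocks --watch
```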

&lt;p&gt;🚨 &lt;strong&gt;Clean-Up: Don't forget to tear down your environment to avoid unnecessary costs!&lt;/strong&gt;&lt;/p&gt;
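&lt;p&gt;A minimal tear-down sketch for the Minikube side (the EC2 instance itself is terminated separately from the AWS console):&lt;/p&gt;

```shell
# Stop the local cluster but keep its state on disk
minikube stop

# Or delete it entirely, including the StarRocks deployment inside it
minikube delete
```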

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrbx4kn59db8yy11t5l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrbx4kn59db8yy11t5l8.png" alt=" " width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
This modern data platform successfully decouples compute from storage, enabling you to scale horizontally and seamlessly. The benefits are clear: significant cost savings by only paying for the compute you use, blistering query performance powered by elastic resources, and the agility to handle any analytical workload on demand. You've just built the future of data analytics.&lt;/p&gt;

&lt;p&gt;This setup is not just for learning; it mirrors the architecture used in production environments to achieve both high performance and cost efficiency. You can now stop your Minikube cluster (minikube stop) and even terminate your EC2 instance to avoid unnecessary costs, knowing you can recreate this entire environment from scratch using the code and configs you've written.&lt;/p&gt;

</description>
      <category>starrocks</category>
      <category>aws</category>
      <category>moderndataplatform</category>
      <category>datalake</category>
    </item>
    <item>
      <title>How to Upgrade Tableau Server from 2023 (CentOS 7) or equivalent to 2025 (Ubuntu 24.04 LTS) Using Blue-Green Deployment</title>
      <dc:creator>Mohamed Ammar</dc:creator>
      <pubDate>Thu, 31 Jul 2025 17:40:15 +0000</pubDate>
      <link>https://dev.to/mohamed_ammar/how-to-upgrade-tableau-server-from-2023-centos-7-or-equivalent-to-2025-ubuntu-2404-lts-using-28n1</link>
      <guid>https://dev.to/mohamed_ammar/how-to-upgrade-tableau-server-from-2023-centos-7-or-equivalent-to-2025-ubuntu-2404-lts-using-28n1</guid>
      <description>&lt;h4&gt;
  
  
  &lt;strong&gt;Author:&lt;/strong&gt; Mohamed Ammar, Senior Data Architect at Vodafone
&lt;/h4&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Last Updated:&lt;/strong&gt; July 2025
&lt;/h4&gt;




&lt;h2&gt;
  
  
  Why This Guide?
&lt;/h2&gt;

&lt;p&gt;If you're running &lt;strong&gt;Tableau Server 2023 or earlier on CentOS 7&lt;/strong&gt;, you're likely aware that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CentOS 7 is deprecated&lt;/strong&gt; (EOL: June 2024).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tableau 2023 support is ending&lt;/strong&gt;, and extended support will cost money.&lt;/li&gt;
&lt;li&gt;You need a &lt;strong&gt;modern, secure, and supported environment&lt;/strong&gt;—and fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This guide walks you through &lt;strong&gt;upgrading to Tableau Server 2025 on Ubuntu 24.04 LTS&lt;/strong&gt; using a &lt;strong&gt;Blue-Green deployment&lt;/strong&gt; strategy. This ensures &lt;strong&gt;zero downtime&lt;/strong&gt; and a &lt;strong&gt;safe rollback path&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A new VM provisioned (e.g., on AWS) with:

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;16 vCPUs&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;32 GB RAM&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;300 GB SSD&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;OS: &lt;strong&gt;Ubuntu 24.04 LTS&lt;/strong&gt;
&lt;/li&gt;

&lt;li&gt;SSH access (e.g., via PuTTY)&lt;/li&gt;

&lt;li&gt;Tableau license key from your current server&lt;/li&gt;

&lt;li&gt;Access to the &lt;code&gt;.tsbak&lt;/code&gt; backup file from your old server&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  Step 1: Prepare the New Environment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  SSH into the VM
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; your-key.pem ubuntu@your-vm-ip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Update the System
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Download Tableau Server
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Register on Tableau’s website to download the &lt;code&gt;.deb&lt;/code&gt; package.&lt;/li&gt;
&lt;li&gt;Download &lt;code&gt;tableau-server-&amp;lt;version&amp;gt;_amd64.deb&lt;/code&gt; to your VM.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Install Dependencies
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; gdebi-core
&lt;span class="nb"&gt;sudo &lt;/span&gt;gdebi &lt;span class="nt"&gt;-n&lt;/span&gt; tableau-server-&amp;lt;version&amp;gt;_amd64.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Initialize TSM
&lt;/h3&gt;

&lt;p&gt;After installation, run the command printed by the installer (from the scripts directory it names):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; ./initialize-tsm &lt;span class="nt"&gt;--accepteula&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 2: Activate and Register Tableau Server
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Switch to Tableau User
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su &lt;span class="nt"&gt;-l&lt;/span&gt; tableau
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Activate License
&lt;/h3&gt;

&lt;p&gt;Get your license key from the old server’s TSM UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm licenses activate &lt;span class="nt"&gt;-k&lt;/span&gt; &amp;lt;your_license_key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set a Password for Tableau User
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt;
passwd tableau
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Register the Server
&lt;/h2&gt;

&lt;p&gt;Create a registration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano registration_file.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Paste and customize:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"first_name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Andrew"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"last_name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Smith"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"phone"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"311-555-2368"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"andrew.smith@mycompany.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"company"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"My Company"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"industry"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Finance"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"company_employees"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"500"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;            
    &lt;/span&gt;&lt;span class="nl"&gt;"department"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Engineering"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Senior Manager"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"city"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Kirkland"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"state"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"WA"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"zip"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"98034"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"country"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"United States"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"opt_in"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"false"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"eula"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"accept"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, generate a registration template, fill it in, then register with the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm register &lt;span class="nt"&gt;--template&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /path/to/registration_file.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm register &lt;span class="nt"&gt;--file&lt;/span&gt; registration_file.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Configure Identity Store
&lt;/h2&gt;

&lt;p&gt;If you're using &lt;strong&gt;local identity store&lt;/strong&gt;, import your config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm settings import &lt;span class="nt"&gt;-f&lt;/span&gt; /opt/tableau/tableau_server/packages/scripts.&amp;lt;version_code&amp;gt;/config.json
tsm pending-changes apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 5: Start Tableau Server
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm initialize &lt;span class="nt"&gt;--start-server&lt;/span&gt; &lt;span class="nt"&gt;--request-timeout&lt;/span&gt; 1800
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the initial admin user (substitute a strong password for the placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tabcmd initialuser &lt;span class="nt"&gt;--server&lt;/span&gt; &lt;span class="s1"&gt;'localhost:80'&lt;/span&gt; &lt;span class="nt"&gt;--username&lt;/span&gt; &lt;span class="s1"&gt;'admin'&lt;/span&gt; &lt;span class="nt"&gt;--password&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;your_password&amp;gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 6: Install PostgreSQL Drivers
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Download from: &lt;a href="https://www.tableau.com/support/drivers" rel="noopener noreferrer"&gt;Tableau Driver Downloads&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Copy the &lt;code&gt;.jar&lt;/code&gt; file to the following path (create it if it does not exist):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/opt/tableau/tableau_driver/jdbc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
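&lt;p&gt;For example (the jar version below is hypothetical; use the filename you actually downloaded):&lt;/p&gt;

```shell
# Hypothetical jar name; substitute the version you downloaded from Tableau
PGJDBC_JAR=postgresql-42.7.3.jar

# Create the driver directory if it does not exist, then copy the JDBC jar
sudo mkdir -p /opt/tableau/tableau_driver/jdbc
sudo cp "$PGJDBC_JAR" /opt/tableau/tableau_driver/jdbc/
```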






&lt;h2&gt;
  
  
  Step 7: Restart TSM
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm restart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 8: Migrate Data from Old Server
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Backup on Old Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm maintenance backup &lt;span class="nt"&gt;-f&lt;/span&gt; backup_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a &lt;code&gt;.tsbak&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Transfer Backup
&lt;/h3&gt;

&lt;p&gt;Use AWS internal networking or an S3 bucket to transfer the file to the new VM.&lt;/p&gt;
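&lt;p&gt;For example, over scp or staged through S3 (host name and bucket below are hypothetical; the destination is Tableau's default backup directory, which you can confirm with &lt;code&gt;tsm configuration get -k basefilepath.backuprestore&lt;/code&gt;):&lt;/p&gt;

```shell
# Option 1: copy directly over the private network (hypothetical host)
scp -i your-key.pem backup_file.tsbak \
  ubuntu@new-vm-private-ip:/var/opt/tableau/tableau_server/data/tabsvc/files/backups/

# Option 2: stage through S3 (hypothetical bucket)
aws s3 cp backup_file.tsbak s3://my-tableau-migration/    # on the old server
aws s3 cp s3://my-tableau-migration/backup_file.tsbak .   # on the new server
```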

&lt;h3&gt;
  
  
  Restore on New Server
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tsm maintenance restore &lt;span class="nt"&gt;-f&lt;/span&gt; backup_file.tsbak
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
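&lt;p&gt;In practice the restore is bracketed by a stop and start, since Tableau Server must be stopped while restoring:&lt;/p&gt;

```shell
# Stop the server, restore the .tsbak, then bring it back up
tsm stop
tsm maintenance restore -f backup_file.tsbak
tsm start
```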






&lt;h2&gt;
  
  
  Step 9: Reapply Topology and Configuration
&lt;/h2&gt;

&lt;p&gt;Manually replicate settings from the old TSM admin panel to the new one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Test dashboards and data sources before switching DNS or load balancer to the new server.&lt;/li&gt;
&lt;li&gt;Keep the old server running until you're confident in the new setup.&lt;/li&gt;
&lt;li&gt;This Blue-Green approach ensures &lt;strong&gt;minimal risk&lt;/strong&gt; and &lt;strong&gt;maximum uptime&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
  </channel>
</rss>
