<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 프링글리스</title>
    <description>The latest articles on DEV Community by 프링글리스 (@_a3742acef86a9239f63).</description>
    <link>https://dev.to/_a3742acef86a9239f63</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3371547%2F5a55357a-99ec-4066-8390-a570215779f6.jpg</url>
      <title>DEV Community: 프링글리스</title>
      <link>https://dev.to/_a3742acef86a9239f63</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_a3742acef86a9239f63"/>
    <language>en</language>
    <item>
      <title>Building a Mini DBaaS with Kubernetes: A DBA's Cloud-Native Engineering Journey</title>
      <dc:creator>프링글리스</dc:creator>
      <pubDate>Sun, 20 Jul 2025 08:27:46 +0000</pubDate>
      <link>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-a-dbas-cloud-native-engineering-journey-10jk</link>
      <guid>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-a-dbas-cloud-native-engineering-journey-10jk</guid>
      <description>&lt;h2&gt;
  
  
  Why Did I Build This?
&lt;/h2&gt;

&lt;p&gt;As a Database Administrator (DBA), I've always been curious about how cloud database services like AWS RDS work internally. Rather than just being a consumer of such services, I wanted to understand the engineering challenges of building a Database-as-a-Service (DBaaS) platform.&lt;/p&gt;

&lt;p&gt;I was particularly fascinated by &lt;strong&gt;AWS Aurora MySQL's fast snapshot creation and cluster restoration capabilities&lt;/strong&gt;, and wanted to implement these advanced features myself. I also wanted to build a complete DBaaS platform that supports various databases (PostgreSQL, MySQL, MariaDB) with high availability and automatic failover capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 &lt;strong&gt;Development Motivation and Goals&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js Learning&lt;/strong&gt;: Hands-on project to strengthen backend development capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Understanding&lt;/strong&gt;: Acquiring cloud-native technologies as a DBA&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Aurora-style Feature Implementation&lt;/strong&gt;: Fast snapshots and cluster restoration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability System Construction&lt;/strong&gt;: HA clusters with automatic failover&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling Feature Implementation&lt;/strong&gt;: Dynamic resource allocation and horizontal/vertical scaling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restoration Feature Implementation&lt;/strong&gt;: AWS Aurora-style fast restoration and cross-instance restoration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Database Support&lt;/strong&gt;: Integrated management of PostgreSQL, MySQL, MariaDB&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💡 &lt;strong&gt;Development Tool Investment&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Initially, my development skills were limited, so I invested about $40 to purchase &lt;strong&gt;Cursor IDE&lt;/strong&gt;. This tool provides AI-based code generation and autocomplete features, which greatly helped in writing complex Kubernetes manifests and Node.js backend code. I was able to efficiently write complex YAML files like Helm chart templates and Kubernetes Operator configurations.&lt;/p&gt;

&lt;p&gt;The goal was simple: &lt;strong&gt;Build a fully functional DBaaS using Node.js and Kubernetes in just one week&lt;/strong&gt; to gain practical experience with cloud-native technologies and deepen my understanding of distributed systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenges
&lt;/h2&gt;

&lt;p&gt;Building a DBaaS involves mastering several complex components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Database Deployment Automation&lt;/strong&gt; (supporting multiple database types)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tenant Isolation&lt;/strong&gt; (proper resource management)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup and Recovery Systems&lt;/strong&gt; (point-in-time recovery)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability Clustering&lt;/strong&gt; (automatic failure recovery)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Monitoring&lt;/strong&gt; and health checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Aurora-style Snapshot System&lt;/strong&gt; (fast backup/restoration)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Major Problems I Encountered
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Complexity of Kubernetes StatefulSets
&lt;/h3&gt;

&lt;p&gt;Managing stateful databases in Kubernetes was trickier than expected. I had to learn:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Persistent Volume Claims (PVC) for data persistence&lt;/li&gt;
&lt;li&gt;CSI VolumeSnapshots for backup/recovery&lt;/li&gt;
&lt;li&gt;Proper resource allocation and limits&lt;/li&gt;
&lt;li&gt;Namespace isolation for multi-tenancy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Created custom Helm charts with appropriate StatefulSet configurations for each database type.&lt;/p&gt;
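As a rough sketch of how an API layer can turn a create-instance request into a Helm release (the chart paths, value keys, and `buildHelmArgs` helper are illustrative assumptions, not the project's actual code):

```javascript
// Hypothetical helper: map a create-instance request onto `helm install` arguments.
// The ./charts/<type> layout and the auth/persistence value keys are assumptions.
function buildHelmArgs({ type, name, config }) {
  const supported = new Set(["postgresql", "mysql", "mariadb"]);
  if (!supported.has(type)) throw new Error(`unsupported type: ${type}`);
  return [
    "install", name, `./charts/${type}`,      // one custom chart per database type
    "--namespace", `tenant-${name}`,          // namespace-based tenant isolation
    "--create-namespace",
    "--set", `auth.password=${config.password}`,
    "--set", `persistence.size=${config.storage || "1Gi"}`, // PVC size for the StatefulSet
  ];
}

console.log(buildHelmArgs({
  type: "postgresql",
  name: "my-first-db",
  config: { password: "securepass123", storage: "2Gi" },
}).join(" "));
// → install my-first-db ./charts/postgresql --namespace tenant-my-first-db --create-namespace --set auth.password=securepass123 --set persistence.size=2Gi
```

The API process would then spawn `helm` with these arguments (for example via `child_process.execFile`) and poll the release status until the StatefulSet pods are ready.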

&lt;h3&gt;
  
  
  2. Multi-Database Support and High Availability Implementation
&lt;/h3&gt;

&lt;p&gt;Each database (PostgreSQL, MySQL, MariaDB) has different deployment patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt;: Zalando PostgreSQL Operator for HA clusters (✅ Success)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL/MariaDB&lt;/strong&gt;: Custom StatefulSet with monitoring exporters (❌ HA implementation failed)&lt;/li&gt;
&lt;li&gt;Different configuration requirements and connection patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Built an integrated API that abstracts database-specific differences while leveraging the advantages of each database type. For PostgreSQL, I successfully integrated the Zalando Operator, but MySQL HA cluster implementation was limited to single instances due to complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. AWS Aurora-style Backup and Recovery System
&lt;/h3&gt;

&lt;p&gt;Implementing Aurora's fast snapshot creation and cluster restoration capabilities was a core goal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CSI VolumeSnapshots for storage-level backup&lt;/li&gt;
&lt;li&gt;Cross-instance backup restoration&lt;/li&gt;
&lt;li&gt;Backup verification and testing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5-10 second snapshot creation&lt;/strong&gt; (Aurora-level performance target)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Used CSI VolumeSnapshots with the hostpath CSI driver for fast storage-level backups that work across all database types. On empty databases, this achieved Aurora-like backup speed (5-10 seconds).&lt;/p&gt;
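A storage-level snapshot of this kind is declared as a small custom resource; a minimal example against the hostpath CSI driver (the namespace, snapshot-class name, and PVC name are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-first-db-snap-1
  namespace: tenant-my-first-db            # tenant namespace (illustrative)
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: data-my-first-db-0   # the StatefulSet's PVC
```

Because the snapshot is taken at the storage layer rather than by dumping data through the database, creation time is largely independent of the engine, which is what lets the same mechanism serve PostgreSQL, MySQL, and MariaDB alike.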

&lt;h3&gt;
  
  
  4. High Availability Clustering (PostgreSQL vs MySQL)
&lt;/h3&gt;

&lt;p&gt;I discovered interesting differences when setting up HA clusters with automatic failure recovery:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL HA (✅ Success)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zalando PostgreSQL Operator integration&lt;/li&gt;
&lt;li&gt;Master/Replica service separation&lt;/li&gt;
&lt;li&gt;Automatic failure detection and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MySQL HA (❌ Failed)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Percona XtraDB Cluster complexity&lt;/li&gt;
&lt;li&gt;Difficulty in Group Replication setup&lt;/li&gt;
&lt;li&gt;Limitations of Operator patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: For PostgreSQL, I successfully integrated the Zalando PostgreSQL Operator for production-grade HA clusters with automatic failure recovery. MySQL is currently limited to single instances, with plans for HA implementation through MySQL Operator or Percona Operator in the future.&lt;/p&gt;
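With the Zalando operator installed, an HA cluster is declared as a single custom resource; a minimal sketch (the team ID, cluster name, and sizes are illustrative):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-my-first-db        # Zalando cluster names are conventionally prefixed with the team ID
  namespace: tenant-my-first-db
spec:
  teamId: "acid"
  numberOfInstances: 2          # one master plus one replica
  volume:
    size: 2Gi
  postgresql:
    version: "15"
```

The operator then creates the StatefulSet and the separate master and replica Services, and handles leader election and failover on its own.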

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;After one week of development, I had a working DBaaS platform with the following capabilities:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ &lt;strong&gt;Completed Features&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Database Support&lt;/strong&gt;: PostgreSQL, MySQL, MariaDB instances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability&lt;/strong&gt;: PostgreSQL HA clusters with automatic failure recovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Aurora-style Backup/Recovery&lt;/strong&gt;: CSI VolumeSnapshot-based fast snapshots (5-10 seconds)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RESTful API&lt;/strong&gt;: Complete CRUD operations for instance management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Monitoring&lt;/strong&gt;: Pod status, resource usage, health checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tenant Isolation&lt;/strong&gt;: Namespace-based resource isolation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Scaling&lt;/strong&gt;: Dynamic CPU/memory allocation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🚧 &lt;strong&gt;Current Limitations&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Web UI&lt;/strong&gt;: Currently CLI/API only (planned for Phase 1)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL HA&lt;/strong&gt;: Only PostgreSQL HA clusters supported (limited due to MySQL HA implementation failure)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Basic monitoring only (Prometheus/Grafana planned)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Basic authentication only (JWT/RBAC planned)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tenancy&lt;/strong&gt;: Basic namespace isolation only (advanced features planned)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📊 &lt;strong&gt;Performance Metrics&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backup Creation&lt;/strong&gt;: 5-10 seconds (Aurora-level, measured on an empty database)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Restoration&lt;/strong&gt;: Within 30 seconds (measured on an empty database)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Instance Deployment&lt;/strong&gt;: Within seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HA Failover&lt;/strong&gt;: Automatic detection and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Note&lt;/strong&gt;: Backup/restoration times are based on empty databases. In actual production environments, times may vary depending on data size.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Technical Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User Request → Node.js API → Kubernetes → Database Instances
                ↓
        CSI VolumeSnapshots (Aurora-style backup/recovery)
                ↓
    PostgreSQL HA Cluster (Zalando Operator)
                ↓
        Real-time Monitoring and Health Checks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Technology Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Node.js + Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration&lt;/strong&gt;: Kubernetes + Helm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Databases&lt;/strong&gt;: PostgreSQL, MySQL, MariaDB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability&lt;/strong&gt;: Zalando PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup/Recovery&lt;/strong&gt;: CSI VolumeSnapshots (Aurora-style)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Real-time pod/helm status tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Development Tools&lt;/strong&gt;: Cursor IDE (AI-based code generation)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Learnings
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Advanced Kubernetes Learning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;StatefulSets&lt;/strong&gt; are powerful for database workloads but complex&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSI VolumeSnapshots&lt;/strong&gt; provide Aurora-level backup functionality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Namespace isolation&lt;/strong&gt; is crucial for multi-tenant environments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource quotas&lt;/strong&gt; prevent resource exhaustion&lt;/li&gt;
&lt;/ul&gt;
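The last two points above combine naturally: each tenant namespace can carry a ResourceQuota so that one tenant's databases cannot starve the others. A minimal example (the limits are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-quota
  namespace: tenant-my-first-db   # one quota per tenant namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
    persistentvolumeclaims: "5"   # cap the number of database volumes
```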

&lt;h3&gt;
  
  
  2. Database Operations in Kubernetes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Helm charts&lt;/strong&gt; make database deployment much easier&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operators&lt;/strong&gt; provide production-grade database management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health checks&lt;/strong&gt; are essential for stable database operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration management through ConfigMaps&lt;/strong&gt; is elegant&lt;/li&gt;
&lt;/ul&gt;
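For the health-check point above, the usual Kubernetes mechanism is a pair of probes on the database container; a sketch for PostgreSQL (the user name and timings are illustrative):

```yaml
readinessProbe:                  # gate traffic until the server accepts connections
  exec:
    command: ["pg_isready", "-U", "postgres"]
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:                   # restart the container if the server stops responding
  exec:
    command: ["pg_isready", "-U", "postgres"]
  initialDelaySeconds: 30
  periodSeconds: 20
```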

&lt;h3&gt;
  
  
  3. Cloud-Native Patterns
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API-first design&lt;/strong&gt; enables automation and integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven architecture&lt;/strong&gt; improves scalability&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure as Code through Helm charts&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Observability through structured logging and metrics&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Importance of Development Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor IDE's&lt;/strong&gt; AI-based code generation greatly helped with complex Kubernetes manifest writing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI tool utilization&lt;/strong&gt; significantly improved development productivity and learning speed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Appropriate tool investment&lt;/strong&gt; plays a crucial role in project success&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;My mini DBaaS can now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy PostgreSQL, MySQL, MariaDB instances in seconds&lt;/li&gt;
&lt;li&gt;Provide high-availability PostgreSQL clusters with automatic failure recovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Aurora-style backups in 5-10 seconds&lt;/strong&gt; (goal achieved, measured on an empty database)&lt;/li&gt;
&lt;li&gt;Restore databases within 30 seconds (measured on an empty database)&lt;/li&gt;
&lt;li&gt;Scale resources dynamically&lt;/li&gt;
&lt;li&gt;Monitor health and performance in real-time&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Based on this experience, I created a comprehensive roadmap for future improvements:&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1 (1-2 weeks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;React web UI for visual management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL HA Cluster Retry&lt;/strong&gt; (Percona XtraDB Operator or MySQL Operator)&lt;/li&gt;
&lt;li&gt;Prometheus + Grafana monitoring stack&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 2 (3-4 weeks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Automated backup scheduling&lt;/li&gt;
&lt;li&gt;JWT-based authentication and RBAC&lt;/li&gt;
&lt;li&gt;Performance monitoring dashboard&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 3 (5-8 weeks)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Advanced multi-tenant features&lt;/li&gt;
&lt;li&gt;Security enhancements (encryption, audit logs)&lt;/li&gt;
&lt;li&gt;Cloud provider integration&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;This project taught me that building cloud services isn't just about technology, but about understanding the operational challenges that arise when managing databases at scale. As a DBA, this experience provided:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deep understanding of cloud-native architectures&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Practical experience with Kubernetes and containerization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Insights into how cloud providers solve database challenges&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Confidence in handling complex distributed systems&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Experience with modern development methodologies using AI tools&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Check It Out
&lt;/h2&gt;

&lt;p&gt;The complete source code is available on GitHub:&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/JungyeolHwang/DBaaS" rel="noopener noreferrer"&gt;https://github.com/JungyeolHwang/DBaaS&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Start
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone repository&lt;/span&gt;
git clone https://github.com/JungyeolHwang/DBaaS.git
&lt;span class="nb"&gt;cd &lt;/span&gt;DBaaS

&lt;span class="c"&gt;# Run setup script&lt;/span&gt;
./scripts/setup.sh

&lt;span class="c"&gt;# Start API server&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;backend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm start

&lt;span class="c"&gt;# Create first database&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/instances &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "type": "postgresql",
    "name": "my-first-db",
    "config": {
      "password": "securepass123",
      "storage": "2Gi"
    }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building this mini DBaaS was an amazing learning experience. It showed that, with the right tools and a solid understanding of the moving parts, you can build a surprisingly capable database service even as a side project.&lt;/p&gt;

&lt;p&gt;Investing in &lt;strong&gt;Cursor IDE&lt;/strong&gt;, an AI tool, played a significant role in the project's success. I was able to efficiently write complex Kubernetes manifests and Node.js backend code, which was a great help in the early development stages.&lt;/p&gt;

&lt;p&gt;Implementing &lt;strong&gt;AWS Aurora-style fast snapshot functionality&lt;/strong&gt; was a core goal, and I hit the target of 5-10 second backup creation using CSI VolumeSnapshots on empty databases. Backup times in production will vary with data size, but storage-level snapshots delivered Aurora-like backup performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High Availability System Construction&lt;/strong&gt; succeeded for PostgreSQL: integrating the Zalando Operator gave me HA clusters with automatic failover. MySQL HA proved more complex than expected, so only PostgreSQL is supported for now; MySQL HA remains on the improvement roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling features&lt;/strong&gt; and &lt;strong&gt;fast restoration features&lt;/strong&gt; were also important goals. I implemented scaling through Kubernetes' dynamic resource allocation and achieved Aurora-like fast restoration performance using CSI VolumeSnapshots.&lt;/p&gt;

&lt;p&gt;The journey from simple database management to cloud-native engineering was eye-opening. Kubernetes, Helm, and modern DevOps practices completely changed how I think about database operations.&lt;/p&gt;

&lt;p&gt;For DBAs who want to expand their skills into cloud-native engineering, I strongly recommend building something similar. Start small, focus on core features, and gradually add complexity. And consider investing in appropriate development tools if needed!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What will your next cloud-native project be?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;#kubernetes #database #dba #cloud-native #nodejs #postgresql #mysql #mariadb #side-project #engineering #devops #cursor-ide #aws-aurora #ha-clustering&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This project was built as a learning exercise to understand cloud-native database services. Feel free to contribute, fork, or use as inspiration for your own projects!&lt;/em&gt; &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes로 Mini DBaaS 구축하기: DBA의 클라우드 네이티브 엔지니어링 도전기</title>
      <dc:creator>프링글리스</dc:creator>
      <pubDate>Sun, 20 Jul 2025 08:25:42 +0000</pubDate>
      <link>https://dev.to/_a3742acef86a9239f63/kubernetesro-mini-dbaas-gucughagi-dbayi-keulraudeu-neitibeu-enjinieoring-dojeongi-2kfa</link>
      <guid>https://dev.to/_a3742acef86a9239f63/kubernetesro-mini-dbaas-gucughagi-dbayi-keulraudeu-neitibeu-enjinieoring-dojeongi-2kfa</guid>
      <description>&lt;h2&gt;
  
  
  왜 만들게 되었나요?
&lt;/h2&gt;

&lt;p&gt;데이터베이스 관리자(DBA)로서 항상 AWS RDS 같은 클라우드 데이터베이스 서비스가 내부적으로 어떻게 동작하는지 궁금했습니다. 단순히 이런 서비스의 소비자가 되는 것이 아니라, Database-as-a-Service(DBaaS) 플랫폼을 구축하는 엔지니어링 도전과제를 이해하고 싶었습니다.&lt;/p&gt;

&lt;p&gt;특히 &lt;strong&gt;AWS Aurora MySQL의 빠른 스냅샷 생성 및 클러스터 복원 기능&lt;/strong&gt;에 매료되어, 이런 고급 기능들을 직접 구현해보고 싶었습니다. 또한 다양한 데이터베이스(PostgreSQL, MySQL, MariaDB)를 지원하면서 고가용성과 자동 페일오버 기능까지 갖춘 완전한 DBaaS 플랫폼을 만들어보고 싶었습니다.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 &lt;strong&gt;개발 동기와 목표&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js 학습&lt;/strong&gt;: 백엔드 개발 역량 강화를 위한 실전 프로젝트&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes 이해&lt;/strong&gt;: DBA로서 클라우드 네이티브 기술 습득&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Aurora 스타일 기능 구현&lt;/strong&gt;: 빠른 스냅샷과 클러스터 복원&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;고가용성 시스템 구축&lt;/strong&gt;: 자동 페일오버가 포함된 HA 클러스터&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;스케일링 기능 구현&lt;/strong&gt;: 동적 리소스 할당 및 수평/수직 확장&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;복원 기능 구현&lt;/strong&gt;: AWS Aurora 스타일의 빠른 복원 및 크로스 인스턴스 복원&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;다중 데이터베이스 지원&lt;/strong&gt;: PostgreSQL, MySQL, MariaDB 통합 관리&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  💡 &lt;strong&gt;개발 도구 투자&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;처음에는 개발 실력이 부족했기 때문에 &lt;strong&gt;Cursor IDE&lt;/strong&gt;를 약 40달러 투자하여 구매했습니다. 이 도구는 AI 기반 코드 생성과 자동완성 기능을 제공하여, 복잡한 Kubernetes 매니페스트와 Node.js 백엔드 코드 작성에 큰 도움이 되었습니다. 특히 Helm 차트 템플릿과 Kubernetes Operator 설정 같은 복잡한 YAML 파일들을 효율적으로 작성할 수 있었습니다.&lt;/p&gt;

&lt;p&gt;목표는 간단했습니다: &lt;strong&gt;Node.js와 Kubernetes를 사용해서 1주일 만에 완전히 동작하는 DBaaS를 구축&lt;/strong&gt;하여 클라우드 네이티브 기술에 대한 실무 경험을 쌓고 분산 시스템에 대한 이해를 깊이하는 것이었습니다.&lt;/p&gt;

&lt;h2&gt;
  
  
  도전 과제
&lt;/h2&gt;

&lt;p&gt;DBaaS를 구축하려면 여러 복잡한 컴포넌트를 마스터해야 합니다:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;데이터베이스 배포 자동화&lt;/strong&gt; (여러 데이터베이스 타입 지원)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;멀티 테넌트 격리&lt;/strong&gt; (적절한 리소스 관리)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;백업 및 복구 시스템&lt;/strong&gt; (포인트 인 타임 복구)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;고가용성 클러스터링&lt;/strong&gt; (자동 장애 복구)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;실시간 모니터링&lt;/strong&gt; 및 헬스 체크&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Aurora 스타일 스냅샷 시스템&lt;/strong&gt; (빠른 백업/복원)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  마주친 주요 문제들
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Kubernetes StatefulSet의 복잡성
&lt;/h3&gt;

&lt;p&gt;Kubernetes에서 상태 유지 데이터베이스를 관리하는 것은 예상보다 까다로웠습니다. 다음을 배워야 했습니다:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;데이터 지속성을 위한 Persistent Volume Claims (PVC)&lt;/li&gt;
&lt;li&gt;백업/복구를 위한 CSI VolumeSnapshots&lt;/li&gt;
&lt;li&gt;적절한 리소스 할당 및 제한&lt;/li&gt;
&lt;li&gt;멀티 테넌시를 위한 네임스페이스 격리&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;해결책&lt;/strong&gt;: 각 데이터베이스 타입별로 적절한 StatefulSet 구성이 포함된 커스텀 Helm 차트를 생성했습니다.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. 다중 데이터베이스 지원과 고가용성 구현
&lt;/h3&gt;

&lt;p&gt;각 데이터베이스(PostgreSQL, MySQL, MariaDB)는 서로 다른 배포 패턴을 가집니다:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt;: HA 클러스터를 위한 Zalando PostgreSQL Operator (✅ 성공)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL/MariaDB&lt;/strong&gt;: 모니터링 익스포터가 포함된 커스텀 StatefulSet (❌ HA 구현 실패)&lt;/li&gt;
&lt;li&gt;서로 다른 설정 요구사항과 연결 패턴&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;해결책&lt;/strong&gt;: 데이터베이스별 차이점을 추상화하면서 각 데이터베이스 타입의 장점을 활용하는 통합 API를 구축했습니다. PostgreSQL의 경우 Zalando Operator를 성공적으로 통합했지만, MySQL HA 클러스터 구현은 복잡성으로 인해 단일 인스턴스로 제한했습니다.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. AWS Aurora 스타일 백업 및 복구 시스템
&lt;/h3&gt;

&lt;p&gt;Aurora의 빠른 스냅샷 생성과 클러스터 복원 기능을 구현하는 것이 핵심 목표였습니다:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;스토리지 레벨 백업을 위한 CSI VolumeSnapshots&lt;/li&gt;
&lt;li&gt;크로스 인스턴스 백업 복원&lt;/li&gt;
&lt;li&gt;백업 검증 및 테스트&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5-10초 내 스냅샷 생성&lt;/strong&gt; (Aurora 수준의 성능 목표)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;해결책&lt;/strong&gt;: 모든 데이터베이스 타입에서 작동하는 빠른 스토리지 레벨 백업을 위해 hostpath-driver가 포함된 CSI VolumeSnapshots를 사용했습니다. 빈 데이터베이스 기준으로 Aurora와 유사한 수준의 빠른 백업 성능(5-10초)을 달성할 수 있었습니다.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. 고가용성 클러스터링 (PostgreSQL vs MySQL)
&lt;/h3&gt;

&lt;p&gt;자동 장애 복구가 포함된 HA 클러스터 설정에서 흥미로운 차이점을 발견했습니다:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL HA (✅ 성공)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zalando PostgreSQL Operator 통합&lt;/li&gt;
&lt;li&gt;Master/Replica 서비스 분리&lt;/li&gt;
&lt;li&gt;자동 장애 감지 및 복구&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MySQL HA (❌ 실패)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Percona XtraDB Cluster 복잡성&lt;/li&gt;
&lt;li&gt;Group Replication 설정의 어려움&lt;/li&gt;
&lt;li&gt;Operator 패턴의 한계&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;해결책&lt;/strong&gt;: PostgreSQL의 경우 자동 장애 복구가 포함된 프로덕션급 HA 클러스터를 위해 Zalando PostgreSQL Operator를 성공적으로 통합했습니다. MySQL은 현재 단일 인스턴스로 제한하고, 향후 MySQL Operator나 Percona Operator를 통한 HA 구현을 계획하고 있습니다.&lt;/p&gt;

&lt;h2&gt;
  
  
  무엇을 만들었나요
&lt;/h2&gt;

&lt;p&gt;1주일의 개발 후, 다음과 같은 동작하는 DBaaS 플랫폼을 갖게 되었습니다:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ &lt;strong&gt;완성된 기능들&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;다중 데이터베이스 지원&lt;/strong&gt;: PostgreSQL, MySQL, MariaDB 인스턴스&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;고가용성&lt;/strong&gt;: 자동 장애 복구가 포함된 PostgreSQL HA 클러스터&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Aurora 스타일 백업/복구&lt;/strong&gt;: CSI VolumeSnapshot 기반 빠른 스냅샷 (5-10초)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RESTful API&lt;/strong&gt;: 인스턴스 관리를 위한 완전한 CRUD 작업&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;실시간 모니터링&lt;/strong&gt;: Pod 상태, 리소스 사용량, 헬스 체크&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;멀티 테넌트 격리&lt;/strong&gt;: 네임스페이스 기반 리소스 격리&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;리소스 스케일링&lt;/strong&gt;: 동적 CPU/메모리 할당&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🚧 &lt;strong&gt;현재 한계점들&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;웹 UI 없음&lt;/strong&gt;: 현재 CLI/API만 지원 (Phase 1에서 계획)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL HA&lt;/strong&gt;: PostgreSQL HA 클러스터만 지원 (MySQL HA 구현 실패로 인한 제한)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;모니터링&lt;/strong&gt;: 기본 모니터링만 (Prometheus/Grafana 계획)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;보안&lt;/strong&gt;: 기본 인증만 (JWT/RBAC 계획)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;멀티 테넌시&lt;/strong&gt;: 기본 네임스페이스 격리만 (고급 기능 계획)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  📊 &lt;strong&gt;성능 지표&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;백업 생성&lt;/strong&gt;: 5-10초 (Aurora 수준, 빈 데이터베이스 기준)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;데이터베이스 복원&lt;/strong&gt;: 30초 내 (빈 데이터베이스 기준)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;인스턴스 배포&lt;/strong&gt;: 몇 초 내&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HA 페일오버&lt;/strong&gt;: 자동 감지 및 복구&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;참고&lt;/strong&gt;: 백업/복원 시간은 빈 데이터베이스 기준입니다. 실제 운영 환경에서는 데이터 크기에 따라 시간이 달라질 수 있습니다.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  기술적 아키텍처
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;사용자 요청 → Node.js API → Kubernetes → 데이터베이스 인스턴스
                ↓
        CSI VolumeSnapshots (Aurora 스타일 백업/복구)
                ↓
    PostgreSQL HA 클러스터 (Zalando Operator)
                ↓
        실시간 모니터링 및 헬스 체크
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  기술 스택
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;백엔드&lt;/strong&gt;: Node.js + Express&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;오케스트레이션&lt;/strong&gt;: Kubernetes + Helm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;데이터베이스&lt;/strong&gt;: PostgreSQL, MySQL, MariaDB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;고가용성&lt;/strong&gt;: Zalando PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;백업/복구&lt;/strong&gt;: CSI VolumeSnapshots (Aurora 스타일)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;모니터링&lt;/strong&gt;: 실시간 pod/helm 상태 추적&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;개발 도구&lt;/strong&gt;: Cursor IDE (AI 기반 코드 생성)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  주요 학습 내용
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Kubernetes 심화 학습
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;StatefulSet&lt;/strong&gt;은 데이터베이스 워크로드에 강력하지만 복잡함&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSI VolumeSnapshots&lt;/strong&gt;은 Aurora 수준의 백업 기능 제공&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;네임스페이스 격리&lt;/strong&gt;는 멀티 테넌트 환경에 중요&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;리소스 할당량&lt;/strong&gt;은 리소스 고갈을 방지&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Kubernetes에서의 데이터베이스 운영
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Helm 차트&lt;/strong&gt;는 데이터베이스 배포를 훨씬 쉽게 만듦&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operator&lt;/strong&gt;는 프로덕션급 데이터베이스 관리 제공&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;헬스 체크&lt;/strong&gt;는 안정적인 데이터베이스 운영에 필수&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ConfigMap을 통한 설정 관리&lt;/strong&gt;는 우아함&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. 클라우드 네이티브 패턴
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API 우선 설계&lt;/strong&gt;는 자동화와 통합을 가능하게 함&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;이벤트 기반 아키텍처&lt;/strong&gt;는 확장성 향상&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Helm 차트를 통한 Infrastructure as Code&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;구조화된 로깅과 메트릭을 통한 관찰성&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. 개발 도구의 중요성
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor IDE&lt;/strong&gt;의 AI 기반 코드 생성이 복잡한 Kubernetes 매니페스트 작성에 큰 도움&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI 도구 활용&lt;/strong&gt;이 개발 생산성과 학습 속도를 크게 향상시킴&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;적절한 도구 투자&lt;/strong&gt;가 프로젝트 성공에 중요한 역할&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  결과
&lt;/h2&gt;

&lt;p&gt;제 미니 DBaaS는 이제 다음을 할 수 있습니다:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL, MySQL, MariaDB 인스턴스를 몇 초 만에 배포&lt;/li&gt;
&lt;li&gt;자동 장애 복구가 포함된 고가용성 PostgreSQL 클러스터 제공&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;5-10초 내에 Aurora 스타일 백업 생성&lt;/strong&gt; (목표 달성!, 빈 데이터베이스 기준)&lt;/li&gt;
&lt;li&gt;30초 내에 데이터베이스 복원 (빈 데이터베이스 기준)&lt;/li&gt;
&lt;li&gt;리소스를 동적으로 스케일링&lt;/li&gt;
&lt;li&gt;실시간으로 헬스와 성능 모니터링&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  다음 단계
&lt;/h2&gt;

&lt;p&gt;이 경험을 바탕으로 향후 개선을 위한 포괄적인 로드맵을 만들었습니다:&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1 (1-2주)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;시각적 관리를 위한 React 웹 UI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MySQL HA 클러스터 재도전&lt;/strong&gt; (Percona XtraDB Operator 또는 MySQL Operator)&lt;/li&gt;
&lt;li&gt;Prometheus + Grafana 모니터링 스택&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 2 (3-4주)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;자동화된 백업 스케줄링&lt;/li&gt;
&lt;li&gt;JWT 기반 인증 및 RBAC&lt;/li&gt;
&lt;li&gt;성능 모니터링 대시보드&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 3 (5-8주)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;고급 멀티 테넌트 기능&lt;/li&gt;
&lt;li&gt;보안 강화 (암호화, 감사 로그)&lt;/li&gt;
&lt;li&gt;클라우드 프로바이더 통합&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  왜 이것이 중요한가
&lt;/h2&gt;

&lt;p&gt;이 프로젝트는 클라우드 서비스를 구축하는 것이 단순히 기술에 관한 것이 아니라, 대규모로 데이터베이스를 관리할 때 발생하는 운영상의 도전과제를 이해하는 것에 관한 것임을 가르쳐주었습니다. DBA로서 이 경험은 다음과 같은 것을 제공했습니다:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;A deep understanding of cloud-native architecture&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hands-on experience with Kubernetes and containerization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Insight into how cloud providers solve database challenges&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Confidence in working with complex distributed systems&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Experience with modern, AI-assisted development workflows&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Check It Out
&lt;/h2&gt;

&lt;p&gt;The full source code is available on GitHub:&lt;br&gt;
&lt;strong&gt;&lt;a href="https://github.com/JungyeolHwang/DBaaS" rel="noopener noreferrer"&gt;https://github.com/JungyeolHwang/DBaaS&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Start
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 저장소 클론&lt;/span&gt;
git clone https://github.com/JungyeolHwang/DBaaS.git
&lt;span class="nb"&gt;cd &lt;/span&gt;DBaaS

&lt;span class="c"&gt;# 설정 스크립트 실행&lt;/span&gt;
./scripts/setup.sh

&lt;span class="c"&gt;# API 서버 시작&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;backend &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm start

&lt;span class="c"&gt;# 첫 번째 데이터베이스 생성&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/instances &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "type": "postgresql",
    "name": "my-first-db",
    "config": {
      "password": "securepass123",
      "storage": "2Gi"
    }
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building this mini DBaaS has been a remarkable learning experience. It showed that, with the right tools and understanding, even a side project can produce a production-ready database service.&lt;/p&gt;

&lt;p&gt;Investing in the AI-powered &lt;strong&gt;Cursor IDE&lt;/strong&gt; in particular played a big role in the project's success. It let me write complex Kubernetes manifests and Node.js backend code efficiently, which was a huge help in the early stages of development.&lt;/p&gt;

&lt;p&gt;Implementing &lt;strong&gt;AWS Aurora-style fast snapshots&lt;/strong&gt; was the core goal, and with CSI VolumeSnapshots I hit the target of creating backups in 5-10 seconds on an empty database. In a real production environment backup time will vary with data size, but storage-level snapshots made it possible to achieve Aurora-like backup performance.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;high availability&lt;/strong&gt;, I successfully integrated the Zalando Operator for PostgreSQL, giving me an HA cluster with automatic failover. Implementing a MySQL HA cluster turned out to be more complex than expected, so only PostgreSQL is supported for now; it's on the improvement roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scaling&lt;/strong&gt; and &lt;strong&gt;fast restore&lt;/strong&gt; were also key goals: scaling is implemented through Kubernetes' dynamic resource allocation, and CSI VolumeSnapshots enabled restore performance comparable to AWS Aurora's.&lt;/p&gt;

&lt;p&gt;The journey from plain database administration to cloud-native engineering has been eye-opening. Kubernetes, Helm, and modern DevOps practices have completely changed how I think about database operations.&lt;/p&gt;

&lt;p&gt;If you're a DBA looking to expand your skills into cloud-native engineering, I highly recommend building something similar. Start small, focus on core functionality, and add complexity incrementally. And if it helps, consider investing in the right development tools!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What will your next cloud-native project be?&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;#kubernetes #database #dba #cloud-native #nodejs #postgresql #mysql #mariadb #side-project #engineering #devops #cursor-ide #aws-aurora #ha-clustering&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This project was built as a learning exercise in understanding cloud-native database services. Feel free to contribute, fork it, or use it as inspiration for your own projects!&lt;/em&gt; &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Mini DBaaS with Kubernetes in One Week - Part 3: Kubernetes Integration &amp; Helm Charts</title>
      <dc:creator>프링글리스</dc:creator>
      <pubDate>Sun, 20 Jul 2025 08:00:48 +0000</pubDate>
      <link>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-in-one-week-part-3-kubernetes-integration-helm-charts-3h05</link>
      <guid>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-in-one-week-part-3-kubernetes-integration-helm-charts-3h05</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome to Part 3! In the previous parts, we set up our environment and created a basic API server. Today, we'll integrate Kubernetes functionality into our Node.js application and create our first Helm charts for database deployment.&lt;/p&gt;

&lt;p&gt;By the end of this post, you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes client integration in Node.js&lt;/li&gt;
&lt;li&gt;Custom Helm charts for PostgreSQL, MySQL, and MariaDB&lt;/li&gt;
&lt;li&gt;Basic database instance deployment functionality&lt;/li&gt;
&lt;li&gt;Error handling for Kubernetes operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What We'll Build Today
&lt;/h2&gt;

&lt;p&gt;We'll create a system that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy database instances using Helm charts&lt;/li&gt;
&lt;li&gt;Monitor deployment status in real-time&lt;/li&gt;
&lt;li&gt;Handle Kubernetes resource management&lt;/li&gt;
&lt;li&gt;Provide proper error handling and logging&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Adding Kubernetes Dependencies
&lt;/h2&gt;

&lt;p&gt;First, let's add the necessary dependencies to our &lt;code&gt;package.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "mini-dbaas-backend",
  "version": "1.0.0",
  "description": "Mini DBaaS API Server",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "dev": "nodemon index.js",
    "test": "jest"
  },
  "dependencies": {
    "express": "^4.18.2",
    "cors": "^2.8.5",
    "helmet": "^7.1.0",
    "dotenv": "^16.3.1",
    "winston": "^3.11.0",
    "joi": "^17.11.0",
    "@kubernetes/client-node": "^0.20.0",
    "yaml": "^2.3.4",
    "uuid": "^9.0.1"
  },
  "devDependencies": {
    "nodemon": "^3.0.2",
    "jest": "^29.7.0"
  },
  "keywords": ["kubernetes", "database", "dbaas", "nodejs"],
  "author": "Your Name",
  "license": "MIT"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the new dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;backend
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Creating Kubernetes Service
&lt;/h2&gt;

&lt;p&gt;Let's create a service to handle all Kubernetes operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const k8s = require('@kubernetes/client-node');
const { exec } = require('child_process');
const { promisify } = require('util');
const logger = require('../utils/logger');
const ResponseUtil = require('../utils/response');

const execAsync = promisify(exec);

class KubernetesService {
  constructor() {
    this.kc = new k8s.KubeConfig();
    this.kc.loadFromDefault();

    this.k8sApi = this.kc.makeApiClient(k8s.CoreV1Api);
    this.appsV1Api = this.kc.makeApiClient(k8s.AppsV1Api);
    this.storageV1Api = this.kc.makeApiClient(k8s.StorageV1Api);
  }

  // Check cluster connectivity
  async checkCluster() {
    try {
      const response = await this.k8sApi.listNamespace();
      logger.info(`Connected to Kubernetes cluster. Found ${response.body.items.length} namespaces`);
      return ResponseUtil.success({ 
        connected: true, 
        namespaces: response.body.items.length 
      });
    } catch (error) {
      logger.error('Failed to connect to Kubernetes cluster', error);
      return ResponseUtil.error('Failed to connect to Kubernetes cluster', 500, error.message);
    }
  }

  // Create namespace
  async createNamespace(name) {
    try {
      const namespace = {
        apiVersion: 'v1',
        kind: 'Namespace',
        metadata: {
          name: name,
          labels: {
            'app': 'mini-dbaas',
            'managed-by': 'mini-dbaas-api'
          }
        }
      };

      await this.k8sApi.createNamespace(namespace);
      logger.info(`Created namespace: ${name}`);
      return ResponseUtil.success({ namespace: name });
    } catch (error) {
      if (error.statusCode === 409) {
        logger.info(`Namespace ${name} already exists`);
        return ResponseUtil.success({ namespace: name, exists: true });
      }
      logger.error(`Failed to create namespace ${name}`, error);
      return ResponseUtil.error(`Failed to create namespace ${name}`, 500, error.message);
    }
  }

  // Delete namespace
  async deleteNamespace(name) {
    try {
      await this.k8sApi.deleteNamespace(name);
      logger.info(`Deleted namespace: ${name}`);
      return ResponseUtil.success({ namespace: name, deleted: true });
    } catch (error) {
      logger.error(`Failed to delete namespace ${name}`, error);
      return ResponseUtil.error(`Failed to delete namespace ${name}`, 500, error.message);
    }
  }

  // Get pod status
  async getPodStatus(namespace, podName) {
    try {
      const response = await this.k8sApi.readNamespacedPod(podName, namespace);
      const pod = response.body;

      return ResponseUtil.success({
        name: pod.metadata.name,
        namespace: pod.metadata.namespace,
        status: pod.status.phase,
        ready: pod.status.containerStatuses?.[0]?.ready || false,
        restartCount: pod.status.containerStatuses?.[0]?.restartCount || 0,
        image: pod.status.containerStatuses?.[0]?.image,
        createdAt: pod.metadata.creationTimestamp
      });
    } catch (error) {
      logger.error(`Failed to get pod status for ${podName} in ${namespace}`, error);
      return ResponseUtil.error(`Failed to get pod status`, 500, error.message);
    }
  }

  // Get all pods in namespace
  async getPodsInNamespace(namespace) {
    try {
      const response = await this.k8sApi.listNamespacedPod(namespace);
      const pods = response.body.items.map(pod =&amp;gt; ({
        name: pod.metadata.name,
        status: pod.status.phase,
        ready: pod.status.containerStatuses?.[0]?.ready || false,
        restartCount: pod.status.containerStatuses?.[0]?.restartCount || 0,
        image: pod.status.containerStatuses?.[0]?.image,
        createdAt: pod.metadata.creationTimestamp
      }));

      return ResponseUtil.success({ pods, count: pods.length });
    } catch (error) {
      logger.error(`Failed to get pods in namespace ${namespace}`, error);
      return ResponseUtil.error(`Failed to get pods`, 500, error.message);
    }
  }

  // Get PVC status
  async getPVCStatus(namespace, pvcName) {
    try {
      const response = await this.k8sApi.readNamespacedPersistentVolumeClaim(pvcName, namespace);
      const pvc = response.body;

      return ResponseUtil.success({
        name: pvc.metadata.name,
        namespace: pvc.metadata.namespace,
        status: pvc.status.phase,
        capacity: pvc.status.capacity?.storage,
        accessModes: pvc.status.accessModes,
        createdAt: pvc.metadata.creationTimestamp
      });
    } catch (error) {
      logger.error(`Failed to get PVC status for ${pvcName} in ${namespace}`, error);
      return ResponseUtil.error(`Failed to get PVC status`, 500, error.message);
    }
  }

  // Execute kubectl command
  async executeKubectl(command) {
    try {
      const { stdout, stderr } = await execAsync(`kubectl ${command}`);
      if (stderr) {
        logger.warn(`kubectl stderr: ${stderr}`);
      }
      return ResponseUtil.success({ output: stdout, command });
    } catch (error) {
      logger.error(`kubectl command failed: ${command}`, error);
      return ResponseUtil.error(`kubectl command failed`, 500, error.message);
    }
  }

  // Execute helm command
  async executeHelm(command) {
    try {
      const { stdout, stderr } = await execAsync(`helm ${command}`);
      if (stderr) {
        logger.warn(`helm stderr: ${stderr}`);
      }
      return ResponseUtil.success({ output: stdout, command });
    } catch (error) {
      logger.error(`helm command failed: ${command}`, error);
      return ResponseUtil.error(`helm command failed`, 500, error.message);
    }
  }
}

module.exports = new KubernetesService();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
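&lt;p&gt;The service above leans on two helpers that haven't appeared in this part: &lt;code&gt;ResponseUtil&lt;/code&gt; and &lt;code&gt;logger&lt;/code&gt;. Their exact implementation isn't shown, but based on how they're called, a minimal stdlib-only sketch might look like this (the real versions presumably use winston and carry more metadata):&lt;/p&gt;

```javascript
// Minimal sketches of the helpers referenced by KubernetesService.
// These shapes are inferred from the call sites, not the project's actual code.

class ResponseUtil {
  // success(data) -> { success: true, data }
  static success(data) {
    return { success: true, data };
  }

  // error(message, statusCode, details) -> { success: false, ... }
  static error(message, statusCode = 500, details = undefined) {
    return { success: false, message, statusCode, details };
  }
}

// A console-backed stand-in for the winston logger used in the real project.
const logger = {
  info: (msg) => console.log(`[INFO] ${msg}`),
  warn: (msg) => console.warn(`[WARN] ${msg}`),
  error: (msg, err) => console.error(`[ERROR] ${msg}`, err ? err.message || err : ''),
};
```

&lt;p&gt;The important part is the &lt;code&gt;{ success, data }&lt;/code&gt; / &lt;code&gt;{ success, message }&lt;/code&gt; envelope, since the Helm service and controllers below branch on &lt;code&gt;result.success&lt;/code&gt;.&lt;/p&gt;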



&lt;h2&gt;
  
  
  Step 3: Creating Helm Chart Service
&lt;/h2&gt;

&lt;p&gt;Now let's create a service to manage Helm chart operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { exec } = require('child_process');
const { promisify } = require('util');
const fs = require('fs').promises;
const path = require('path');
const logger = require('../utils/logger');
const ResponseUtil = require('../utils/response');
const k8sService = require('./k8s');

const execAsync = promisify(exec);

class HelmService {
  constructor() {
    this.chartsPath = path.join(__dirname, '../../helm-charts');
  }

  // Deploy database using Helm
  async deployDatabase(namespace, instanceName, dbType, config) {
    try {
      // Create namespace if it doesn't exist
      await k8sService.createNamespace(namespace);

      // Prepare Helm values
      const values = this.prepareHelmValues(dbType, config);
      // JSON is a subset of YAML, so the JSON written below is a valid values file
      const valuesFile = path.join(this.chartsPath, `${dbType}-local`, 'values.yaml');

      // Write values to file
      await fs.writeFile(valuesFile, JSON.stringify(values, null, 2));

      // Deploy using Helm
      const helmCommand = `install ${instanceName} ${this.chartsPath}/${dbType}-local --namespace ${namespace} --values ${valuesFile}`;
      const result = await k8sService.executeHelm(helmCommand);

      if (result.success) {
        logger.info(`Successfully deployed ${dbType} instance: ${instanceName} in namespace: ${namespace}`);
        return ResponseUtil.success({
          instanceName,
          namespace,
          dbType,
          status: 'deploying',
          helmOutput: result.data.output
        });
      } else {
        throw new Error(result.message);
      }
    } catch (error) {
      logger.error(`Failed to deploy ${dbType} instance: ${instanceName}`, error);
      return ResponseUtil.error(`Failed to deploy database instance`, 500, error.message);
    }
  }

  // Delete database using Helm
  async deleteDatabase(namespace, instanceName) {
    try {
      const helmCommand = `uninstall ${instanceName} --namespace ${namespace}`;
      const result = await k8sService.executeHelm(helmCommand);

      if (result.success) {
        logger.info(`Successfully deleted instance: ${instanceName} from namespace: ${namespace}`);
        return ResponseUtil.success({
          instanceName,
          namespace,
          status: 'deleted',
          helmOutput: result.data.output
        });
      } else {
        throw new Error(result.message);
      }
    } catch (error) {
      logger.error(`Failed to delete instance: ${instanceName}`, error);
      return ResponseUtil.error(`Failed to delete database instance`, 500, error.message);
    }
  }

  // Get Helm release status
  async getReleaseStatus(namespace, releaseName) {
    try {
      const helmCommand = `status ${releaseName} --namespace ${namespace} --output json`;
      const result = await k8sService.executeHelm(helmCommand);

      if (result.success) {
        const status = JSON.parse(result.data.output);
        return ResponseUtil.success({
          name: releaseName,
          namespace,
          status: status.info?.status,
          revision: status.version,
          lastDeployed: status.info?.last_deployed,
          description: status.info?.description
        });
      } else {
        throw new Error(result.message);
      }
    } catch (error) {
      logger.error(`Failed to get release status for: ${releaseName}`, error);
      return ResponseUtil.error(`Failed to get release status`, 500, error.message);
    }
  }

  // List all Helm releases
  async listReleases(namespace = null) {
    try {
      const helmCommand = namespace 
        ? `list --namespace ${namespace} --output json`
        : `list --all-namespaces --output json`;

      const result = await k8sService.executeHelm(helmCommand);

      if (result.success) {
        const releases = JSON.parse(result.data.output);
        return ResponseUtil.success({
          releases: releases.map(release =&amp;gt; ({
            name: release.name,
            namespace: release.namespace,
            status: release.status,
            revision: release.revision,
            lastDeployed: release.updated
          })),
          count: releases.length
        });
      } else {
        throw new Error(result.message);
      }
    } catch (error) {
      logger.error('Failed to list Helm releases', error);
      return ResponseUtil.error('Failed to list Helm releases', 500, error.message);
    }
  }

  // Prepare Helm values based on database type and config
  prepareHelmValues(dbType, config) {
    const baseValues = {
      global: {
        storageClass: "standard"
      },
      persistence: {
        enabled: true,
        size: config.storage || "1Gi"
      },
      resources: {
        requests: {
          memory: config.memory || "256Mi",
          cpu: config.cpu || "250m"
        },
        limits: {
          memory: config.memoryLimit || "512Mi",
          cpu: config.cpuLimit || "500m"
        }
      }
    };

    switch (dbType) {
      case 'postgresql':
        return {
          ...baseValues,
          auth: {
            postgresPassword: config.password,
            database: config.database || "postgres"
          },
          primary: {
            persistence: {
              enabled: true,
              size: config.storage || "1Gi"
            }
          }
        };

      case 'mysql':
        return {
          ...baseValues,
          auth: {
            rootPassword: config.password,
            database: config.database || "mysql"
          },
          primary: {
            persistence: {
              enabled: true,
              size: config.storage || "1Gi"
            }
          }
        };

      case 'mariadb':
        return {
          ...baseValues,
          auth: {
            rootPassword: config.password,
            database: config.database || "mariadb"
          },
          primary: {
            persistence: {
              enabled: true,
              size: config.storage || "1Gi"
            }
          }
        };

      default:
        throw new Error(`Unsupported database type: ${dbType}`);
    }
  }
}

module.exports = new HelmService();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
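&lt;p&gt;To make the config-to-values mapping easier to follow, here is a trimmed, standalone version of the PostgreSQL branch of &lt;code&gt;prepareHelmValues&lt;/code&gt; (illustrative only; the full service also builds resource requests/limits and handles MySQL and MariaDB):&lt;/p&gt;

```javascript
// Trimmed sketch of the PostgreSQL branch of prepareHelmValues.
// Defaults mirror the service above: 1Gi storage, "postgres" database.
function preparePostgresValues(config) {
  return {
    global: { storageClass: 'standard' },
    auth: {
      postgresPassword: config.password,
      database: config.database || 'postgres',
    },
    primary: {
      persistence: {
        enabled: true,
        size: config.storage || '1Gi',
      },
    },
  };
}
```

&lt;p&gt;With only a password supplied, the defaults kick in, which is what makes everything in the API's &lt;code&gt;config&lt;/code&gt; object beyond the credentials optional.&lt;/p&gt;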



&lt;h2&gt;
  
  
  Step 4: Creating Database Instance Controller
&lt;/h2&gt;

&lt;p&gt;Let's create a controller to handle database instance operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const helmService = require('../services/helm');
const k8sService = require('../services/k8s');
const logger = require('../utils/logger');
const ResponseUtil = require('../utils/response');

class InstanceController {
  // Create a new database instance
  async createInstance(req, res) {
    try {
      const { type, name, config } = req.body;
      const namespace = `dbaas-${name}`;

      logger.info(`Creating ${type} instance: ${name} in namespace: ${namespace}`);

      // Validate required fields
      if (!type || !name || !config) {
        return res.status(400).json(
          ResponseUtil.error('Missing required fields: type, name, config', 400)
        );
      }

      // Deploy database using Helm
      const result = await helmService.deployDatabase(namespace, name, type, config);

      if (result.success) {
        res.status(201).json(result);
      } else {
        res.status(500).json(result);
      }
    } catch (error) {
      logger.error('Error creating instance', error);
      res.status(500).json(
        ResponseUtil.error('Failed to create instance', 500, error.message)
      );
    }
  }

  // Get all instances
  async getInstances(req, res) {
    try {
      logger.info('Fetching all instances');

      const result = await helmService.listReleases();

      if (result.success) {
        // Filter only our DBaaS instances
        const dbaasInstances = result.data.releases.filter(release =&amp;gt; 
          release.namespace.startsWith('dbaas-')
        );

        res.json(ResponseUtil.success({
          instances: dbaasInstances,
          count: dbaasInstances.length
        }));
      } else {
        res.status(500).json(result);
      }
    } catch (error) {
      logger.error('Error fetching instances', error);
      res.status(500).json(
        ResponseUtil.error('Failed to fetch instances', 500, error.message)
      );
    }
  }

  // Get specific instance
  async getInstance(req, res) {
    try {
      const { name } = req.params;
      const namespace = `dbaas-${name}`;

      logger.info(`Fetching instance: ${name}`);

      // Get Helm release status
      const releaseResult = await helmService.getReleaseStatus(namespace, name);

      if (!releaseResult.success) {
        return res.status(404).json(
          ResponseUtil.error(`Instance ${name} not found`, 404)
        );
      }

      // Get pod status
      const podsResult = await k8sService.getPodsInNamespace(namespace);

      res.json(ResponseUtil.success({
        ...releaseResult.data,
        pods: podsResult.success ? podsResult.data.pods : []
      }));
    } catch (error) {
      logger.error(`Error fetching instance: ${req.params.name}`, error);
      res.status(500).json(
        ResponseUtil.error('Failed to fetch instance', 500, error.message)
      );
    }
  }

  // Delete instance
  async deleteInstance(req, res) {
    try {
      const { name } = req.params;
      const namespace = `dbaas-${name}`;

      logger.info(`Deleting instance: ${name}`);

      // Delete Helm release
      const result = await helmService.deleteDatabase(namespace, name);

      if (result.success) {
        // Delete namespace after a delay to ensure cleanup
        setTimeout(async () =&amp;gt; {
          await k8sService.deleteNamespace(namespace);
        }, 5000);

        res.json(result);
      } else {
        res.status(500).json(result);
      }
    } catch (error) {
      logger.error(`Error deleting instance: ${req.params.name}`, error);
      res.status(500).json(
        ResponseUtil.error('Failed to delete instance', 500, error.message)
      );
    }
  }

  // Get instance connection info
  async getConnectionInfo(req, res) {
    try {
      const { name } = req.params;
      const namespace = `dbaas-${name}`;

      logger.info(`Fetching connection info for instance: ${name}`);

      // Get service info
      const serviceResult = await k8sService.executeKubectl(
        `get svc -n ${namespace} -o json`
      );

      if (!serviceResult.success) {
        return res.status(404).json(
          ResponseUtil.error(`Instance ${name} not found`, 404)
        );
      }

      const services = JSON.parse(serviceResult.data.output);
      const dbService = services.items.find(svc =&amp;gt; 
        svc.metadata.name.includes(name) &amp;amp;&amp;amp; svc.spec.ports
      );

      if (!dbService) {
        return res.status(404).json(
          ResponseUtil.error(`Service not found for instance ${name}`, 404)
        );
      }

      const connectionInfo = {
        host: `${dbService.metadata.name}.${namespace}.svc.cluster.local`,
        port: dbService.spec.ports[0].port,
        service: dbService.metadata.name,
        namespace: namespace
      };

      res.json(ResponseUtil.success(connectionInfo));
    } catch (error) {
      logger.error(`Error fetching connection info for: ${req.params.name}`, error);
      res.status(500).json(
        ResponseUtil.error('Failed to fetch connection info', 500, error.message)
      );
    }
  }
}

module.exports = new InstanceController();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
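&lt;p&gt;One convention worth calling out: every instance gets its own namespace named &lt;code&gt;dbaas-&amp;lt;instance name&amp;gt;&lt;/code&gt;, and &lt;code&gt;getInstances&lt;/code&gt; relies on that prefix to separate our releases from everything else in the cluster. A small sketch of that filtering, using hypothetical release data:&lt;/p&gt;

```javascript
// Each instance lives in its own "dbaas-<name>" namespace, so listing
// our instances is just a prefix filter over all Helm releases.
function filterDbaasReleases(releases) {
  return releases.filter((r) => r.namespace.startsWith('dbaas-'));
}

// Hypothetical example data, not real cluster output.
const releases = [
  { name: 'my-first-db', namespace: 'dbaas-my-first-db' },
  { name: 'ingress-nginx', namespace: 'ingress-nginx' },
];
```

&lt;p&gt;The trade-off of namespace-per-instance is more Kubernetes objects to manage, but deletion becomes trivial: removing the namespace cleans up everything the instance owned.&lt;/p&gt;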



&lt;h2&gt;
  
  
  Step 5: Creating Routes
&lt;/h2&gt;

&lt;p&gt;Now let's create the routes for our API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const router = express.Router();
const InstanceController = require('../controllers/InstanceController');
const { validate, schemas } = require('../middleware/validation');

// Create new instance
router.post('/', validate(schemas.instance), InstanceController.createInstance);

// Get all instances
router.get('/', InstanceController.getInstances);

// Get specific instance
router.get('/:name', InstanceController.getInstance);

// Get instance connection info
router.get('/:name/connection', InstanceController.getConnectionInfo);

// Delete instance
router.delete('/:name', InstanceController.deleteInstance);

module.exports = router;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
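&lt;p&gt;The POST route references a &lt;code&gt;schemas.instance&lt;/code&gt; Joi schema from the validation middleware that isn't shown in this part. As a rough stand-in, the same checks can be expressed in plain JavaScript (the exact rules here are assumptions, not the project's actual schema):&lt;/p&gt;

```javascript
// Stdlib-only stand-in for the Joi-based schemas.instance validation.
// Field names match the API (type, name, config); the rules are assumed.
const SUPPORTED_TYPES = ['postgresql', 'mysql', 'mariadb'];

function validateInstanceRequest(body) {
  const errors = [];
  if (!SUPPORTED_TYPES.includes(body.type)) {
    errors.push(`type must be one of ${SUPPORTED_TYPES.join(', ')}`);
  }
  // Instance names become Kubernetes namespaces, so keep them DNS-label safe.
  if (typeof body.name !== 'string' || !/^[a-z0-9]([a-z0-9-]*[a-z0-9])?$/.test(body.name)) {
    errors.push('name must be a lowercase DNS-1123 label');
  }
  if (!body.config || typeof body.config.password !== 'string') {
    errors.push('config.password is required');
  }
  return { valid: errors.length === 0, errors };
}
```

&lt;p&gt;Validating the name as a DNS label matters because it's interpolated straight into the &lt;code&gt;dbaas-${name}&lt;/code&gt; namespace and the Helm release name.&lt;/p&gt;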



&lt;h2&gt;
  
  
  Step 6: Updating Main Server File
&lt;/h2&gt;

&lt;p&gt;Let's update our main server file to include the new routes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
require('dotenv').config();

const app = express();
const PORT = process.env.PORT || 3000;

// Import routes
const instancesRouter = require('./routes/instances');

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Basic logging middleware
app.use((req, res, next) =&amp;gt; {
  console.log(`${new Date().toISOString()} - ${req.method} ${req.path}`);
  next();
});

// Health check endpoint
app.get('/health', (req, res) =&amp;gt; {
  res.json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    environment: process.env.NODE_ENV
  });
});

// API information endpoint
app.get('/', (req, res) =&amp;gt; {
  res.json({
    name: 'Mini DBaaS API',
    version: '1.0.0',
    description: 'Database as a Service API built with Node.js and Kubernetes',
    endpoints: {
      health: '/health',
      instances: '/instances',
      'ha-clusters': '/ha-clusters'
    }
  });
});

// Routes
app.use('/instances', instancesRouter);

// Error handling middleware
app.use((err, req, res, next) =&amp;gt; {
  console.error(err.stack);
  res.status(500).json({
    error: 'Something went wrong!',
    message: process.env.NODE_ENV === 'development' ? err.message : 'Internal server error'
  });
});

// 404 handler
app.use('*', (req, res) =&amp;gt; {
  res.status(404).json({
    error: 'Endpoint not found',
    path: req.originalUrl
  });
});

// Start server
app.listen(PORT, () =&amp;gt; {
  console.log(`🚀 Mini DBaaS API server running on port ${PORT}`);
  console.log(`📊 Health check: http://localhost:${PORT}/health`);
  console.log(`📚 API docs: http://localhost:${PORT}/`);
  console.log(`🗄️  Instances API: http://localhost:${PORT}/instances`);
});

module.exports = app;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
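&lt;p&gt;Note how the error-handling middleware above hides internal error messages outside development. That decision is easy to isolate and reason about on its own; a sketch of just that branch (not the middleware itself):&lt;/p&gt;

```javascript
// Mirrors the branch in the Express error handler: detailed messages
// in development, a generic message everywhere else.
function errorBody(env, err) {
  return {
    error: 'Something went wrong!',
    message: env === 'development' ? err.message : 'Internal server error',
  };
}
```

&lt;p&gt;Leaking raw error messages in production can expose file paths, hostnames, or query fragments, which is why the generic message is the default.&lt;/p&gt;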



&lt;h2&gt;
  
  
  Step 7: Creating Basic Helm Charts
&lt;/h2&gt;

&lt;p&gt;Let's create basic Helm charts for our databases. First, the PostgreSQL chart's &lt;code&gt;Chart.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v2
name: postgresql-local
description: A Helm chart for PostgreSQL database instances
type: application
version: 0.1.0
appVersion: "15.0"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Default values for postgresql-local
global:
  storageClass: "standard"

auth:
  postgresPassword: "postgres"
  database: "postgres"

primary:
  persistence:
    enabled: true
    size: "1Gi"
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"

service:
  type: ClusterIP
  port: 5432
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "postgresql-local.fullname" . }}
  labels:
    {{- include "postgresql-local.labels" . | nindent 4 }}
spec:
  serviceName: {{ include "postgresql-local.fullname" . }}
  replicas: 1
  selector:
    matchLabels:
      {{- include "postgresql-local.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "postgresql-local.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: postgresql
        image: postgres:15
        ports:
        - containerPort: 5432
          name: postgresql
        env:
        - name: POSTGRES_PASSWORD
          value: {{ .Values.auth.postgresPassword | quote }}
        - name: POSTGRES_DB
          value: {{ .Values.auth.database | quote }}
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
        resources:
          {{- toYaml .Values.primary.resources | nindent 10 }}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: {{ .Values.global.storageClass }}
      resources:
        requests:
          storage: {{ .Values.primary.persistence.size }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: {{ include "postgresql-local.fullname" . }}
  labels:
    {{- include "postgresql-local.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: postgresql
      protocol: TCP
      name: postgresql
  selector:
    {{- include "postgresql-local.selectorLabels" . | nindent 4 }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{/*
Expand the name of the chart.
*/}}
{{- define "postgresql-local.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
*/}}
{{- define "postgresql-local.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "postgresql-local.labels" -}}
helm.sh/chart: {{ include "postgresql-local.chart" . }}
{{ include "postgresql-local.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "postgresql-local.selectorLabels" -}}
app.kubernetes.io/name: {{ include "postgresql-local.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "postgresql-local.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
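&lt;p&gt;The &lt;code&gt;postgresql-local.fullname&lt;/code&gt; helper decides how every resource gets named. In JavaScript terms the logic is roughly the following (a sketch of the template's behavior, ignoring the &lt;code&gt;fullnameOverride&lt;/code&gt; path):&lt;/p&gt;

```javascript
// Rough JS equivalent of the postgresql-local.fullname Helm helper:
// if the release name already contains the chart name, use it as-is;
// otherwise join them, truncated to Kubernetes' 63-character name limit.
function fullname(releaseName, chartName) {
  const raw = releaseName.includes(chartName)
    ? releaseName
    : `${releaseName}-${chartName}`;
  return raw.slice(0, 63).replace(/-+$/, '');
}
```

&lt;p&gt;So a release named &lt;code&gt;my-first-db&lt;/code&gt; yields resources named &lt;code&gt;my-first-db-postgresql-local&lt;/code&gt;, which is why the connection-info endpoint matches services by instance name.&lt;/p&gt;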



&lt;h2&gt;
  
  
  Step 8: Testing Our Kubernetes Integration
&lt;/h2&gt;

&lt;p&gt;Let's create a test script to verify our setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

echo "🧪 Testing Kubernetes Integration"

# Test 1: Check cluster connectivity
echo "🔗 Testing cluster connectivity..."
curl -s http://localhost:3000/health | grep -q "healthy" &amp;amp;&amp;amp; echo "✅ Health check passed" || echo "❌ Health check failed"

# Test 2: Create PostgreSQL instance
echo "🗄️  Creating PostgreSQL instance..."
CREATE_RESPONSE=$(curl -s -X POST http://localhost:3000/instances \
  -H "Content-Type: application/json" \
  -d '{
    "type": "postgresql",
    "name": "test-postgres",
    "config": {
      "password": "testpass123",
      "storage": "1Gi",
      "database": "testdb"
    }
  }')

echo "Create response: $CREATE_RESPONSE"

# Test 3: List instances
echo "📋 Listing instances..."
LIST_RESPONSE=$(curl -s http://localhost:3000/instances)
echo "List response: $LIST_RESPONSE"

# Test 4: Get instance status
echo "📊 Getting instance status..."
sleep 10
STATUS_RESPONSE=$(curl -s http://localhost:3000/instances/test-postgres)
echo "Status response: $STATUS_RESPONSE"

# Test 5: Get connection info
echo "🔌 Getting connection info..."
CONNECTION_RESPONSE=$(curl -s http://localhost:3000/instances/test-postgres/connection)
echo "Connection response: $CONNECTION_RESPONSE"

echo "🎉 Kubernetes integration test completed!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make it executable and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x scripts/test-k8s-integration.sh
./scripts/test-k8s-integration.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What We've Accomplished
&lt;/h2&gt;

&lt;p&gt;Today we've successfully:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Integrated Kubernetes client&lt;/strong&gt; into our Node.js application&lt;br&gt;
✅ &lt;strong&gt;Created Helm service&lt;/strong&gt; for database deployment management&lt;br&gt;
✅ &lt;strong&gt;Built instance controller&lt;/strong&gt; with full CRUD operations&lt;br&gt;
✅ &lt;strong&gt;Implemented proper error handling&lt;/strong&gt; for Kubernetes operations&lt;br&gt;
✅ &lt;strong&gt;Created basic Helm charts&lt;/strong&gt; for PostgreSQL deployment&lt;br&gt;
✅ &lt;strong&gt;Added comprehensive logging&lt;/strong&gt; for debugging&lt;br&gt;
✅ &lt;strong&gt;Built RESTful API endpoints&lt;/strong&gt; for instance management&lt;br&gt;
✅ &lt;strong&gt;Implemented validation middleware&lt;/strong&gt; for API requests&lt;/p&gt;
&lt;h2&gt;
  
  
  Testing the API
&lt;/h2&gt;

&lt;p&gt;Now you can test your API with these commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a PostgreSQL instance&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/instances &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "type": "postgresql",
    "name": "my-postgres",
    "config": {
      "password": "securepass123",
      "storage": "2Gi",
      "database": "myapp"
    }
  }'&lt;/span&gt;

&lt;span class="c"&gt;# List all instances&lt;/span&gt;
curl http://localhost:3000/instances

&lt;span class="c"&gt;# Get specific instance status&lt;/span&gt;
curl http://localhost:3000/instances/my-postgres

&lt;span class="c"&gt;# Get connection information&lt;/span&gt;
curl http://localhost:3000/instances/my-postgres/connection

&lt;span class="c"&gt;# Delete instance&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; DELETE http://localhost:3000/instances/my-postgres
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;In Part 4, we'll enhance our database instance management with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MySQL and MariaDB Helm charts&lt;/li&gt;
&lt;li&gt;Advanced monitoring and status tracking&lt;/li&gt;
&lt;li&gt;Connection pooling and optimization&lt;/li&gt;
&lt;li&gt;Backup and recovery preparation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Common Issues
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Kubernetes client connection issues&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check if kubectl is working&lt;/span&gt;
kubectl get nodes

&lt;span class="c"&gt;# Verify kubeconfig&lt;/span&gt;
kubectl config view
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Helm chart deployment failures&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check Helm chart syntax&lt;/span&gt;
helm lint helm-charts/postgresql-local

&lt;span class="c"&gt;# Test chart installation&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;test-postgres helm-charts/postgresql-local &lt;span class="nt"&gt;--dry-run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Pod startup issues&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check pod logs&lt;/span&gt;
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &amp;lt;namespace&amp;gt;

&lt;span class="c"&gt;# Check pod events&lt;/span&gt;
kubectl describe pod &amp;lt;pod-name&amp;gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &amp;lt;namespace&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We now have a fully functional Kubernetes-integrated API server that can deploy and manage database instances! Our system can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deploy PostgreSQL instances using custom Helm charts&lt;/li&gt;
&lt;li&gt;Monitor deployment status in real-time&lt;/li&gt;
&lt;li&gt;Provide connection information for applications&lt;/li&gt;
&lt;li&gt;Handle proper cleanup and resource management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next part, we'll add MySQL and MariaDB support, and implement more advanced features. Get ready to see your first database running in Kubernetes! 🚀&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Series Navigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="//../part1-architecture-overview"&gt;Part 1: Architecture Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="//../part2-environment-setup"&gt;Part 2: Environment Setup &amp;amp; Basic API Server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 3: Kubernetes Integration &amp;amp; Helm Charts&lt;/strong&gt; (this post)&lt;/li&gt;
&lt;li&gt;Part 4: Database Instance Management&lt;/li&gt;
&lt;li&gt;Part 5: Backup &amp;amp; Recovery with CSI VolumeSnapshots&lt;/li&gt;
&lt;li&gt;Part 6: High Availability with PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;Part 7: Multi-Tenant Features &amp;amp; Final Testing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;#kubernetes #helm #nodejs #postgresql #api #deployment #tutorial #series #docker #minikube&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Mini DBaaS with Kubernetes in One Week - Part 2: Environment Setup &amp; Basic API Server</title>
      <dc:creator>프링글리스</dc:creator>
      <pubDate>Sun, 20 Jul 2025 07:54:28 +0000</pubDate>
      <link>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-in-one-week-part-2-environment-setup-basic-api-server-4djg</link>
      <guid>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-in-one-week-part-2-environment-setup-basic-api-server-4djg</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Welcome back! In Part 1, we discussed the architecture and planning for our mini DBaaS platform. Today, we'll get our hands dirty and set up the development environment, then create the foundation of our Node.js API server.&lt;/p&gt;

&lt;p&gt;By the end of this post, you'll have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A working Kubernetes cluster with minikube&lt;/li&gt;
&lt;li&gt;A basic Node.js API server with Express&lt;/li&gt;
&lt;li&gt;Initial project structure following best practices&lt;/li&gt;
&lt;li&gt;Basic health check and routing setup&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites Check
&lt;/h2&gt;

&lt;p&gt;Before we start, let's make sure you have all the required tools installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check Docker&lt;/span&gt;
docker &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;span class="c"&gt;# Check Node.js (v18+)&lt;/span&gt;
node &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;span class="c"&gt;# Check kubectl&lt;/span&gt;
kubectl version &lt;span class="nt"&gt;--client&lt;/span&gt;

&lt;span class="c"&gt;# Check Helm&lt;/span&gt;
helm version

&lt;span class="c"&gt;# Check minikube&lt;/span&gt;
minikube version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If any of these are missing, install them first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Desktop&lt;/strong&gt;: &lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Download here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js&lt;/strong&gt;: &lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Download here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt;: &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;Installation guide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt;: &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;Installation guide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;minikube&lt;/strong&gt;: &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;Installation guide&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Setting Up Kubernetes Environment
&lt;/h2&gt;

&lt;p&gt;Let's start by setting up our local Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start minikube with adequate resources&lt;/span&gt;
minikube start &lt;span class="nt"&gt;--cpus&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4 &lt;span class="nt"&gt;--memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8192 &lt;span class="nt"&gt;--disk-size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;20g

&lt;span class="c"&gt;# Enable necessary addons&lt;/span&gt;
minikube addons &lt;span class="nb"&gt;enable &lt;/span&gt;csi-hostpath-driver
minikube addons &lt;span class="nb"&gt;enable &lt;/span&gt;volumesnapshots

&lt;span class="c"&gt;# Verify cluster is running&lt;/span&gt;
kubectl cluster-info
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verify CSI and VolumeSnapshot Support
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check CSI drivers&lt;/span&gt;
kubectl get csidriver

&lt;span class="c"&gt;# Check VolumeSnapshot classes&lt;/span&gt;
kubectl get volumesnapshotclass

&lt;span class="c"&gt;# You should see:&lt;/span&gt;
&lt;span class="c"&gt;# NAME                    DRIVER                DELETIONPOLICY   AGE&lt;/span&gt;
&lt;span class="c"&gt;# csi-hostpath-snapclass  hostpath.csi.k8s.io   Delete           1m&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Setting Up Helm Repositories
&lt;/h2&gt;

&lt;p&gt;We'll use Bitnami charts for our databases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add Bitnami repository&lt;/span&gt;
helm repo add bitnami https://charts.bitnami.com/bitnami

&lt;span class="c"&gt;# Add Zalando PostgreSQL Operator repository&lt;/span&gt;
helm repo add zalando https://opensource.zalando.com/postgres-operator/charts/postgres-operator

&lt;span class="c"&gt;# Update repositories&lt;/span&gt;
helm repo update

&lt;span class="c"&gt;# Verify repositories&lt;/span&gt;
helm repo list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Creating Project Structure
&lt;/h2&gt;

&lt;p&gt;Let's create a well-organized project structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create project directory&lt;/span&gt;
&lt;span class="nb"&gt;mkdir &lt;/span&gt;mini-dbaas &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;mini-dbaas

&lt;span class="c"&gt;# Create backend structure&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; backend/&lt;span class="o"&gt;{&lt;/span&gt;controllers,routes,services,middleware,utils&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; helm-charts/&lt;span class="o"&gt;{&lt;/span&gt;postgresql-local,mysql-local,mariadb-local&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; k8s/operators
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; scripts

&lt;span class="c"&gt;# Create initial files&lt;/span&gt;
&lt;span class="nb"&gt;touch &lt;/span&gt;backend/package.json
&lt;span class="nb"&gt;touch &lt;/span&gt;backend/index.js
&lt;span class="nb"&gt;touch &lt;/span&gt;backend/.env.example
&lt;span class="nb"&gt;touch &lt;/span&gt;backend/.env
&lt;span class="nb"&gt;touch &lt;/span&gt;README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your project structure should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mini-dbaas/
├── backend/
│   ├── controllers/
│   ├── routes/
│   ├── services/
│   ├── middleware/
│   ├── utils/
│   ├── package.json
│   ├── index.js
│   ├── .env.example
│   └── .env
├── helm-charts/
│   ├── postgresql-local/
│   ├── mysql-local/
│   └── mariadb-local/
├── k8s/
│   └── operators/
├── scripts/
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Setting Up Node.js Backend
&lt;/h2&gt;

&lt;p&gt;Let's create our Node.js API server:&lt;/p&gt;

&lt;h3&gt;
  
  
  Package.json Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "mini-dbaas-backend",
  "version": "1.0.0",
  "description": "Mini DBaaS API Server",
  "main": "index.js",
  "scripts": {
    "start": "node index.js",
    "dev": "nodemon index.js",
    "test": "jest"
  },
  "dependencies": {
    "express": "^4.18.2",
    "cors": "^2.8.5",
    "helmet": "^7.1.0",
    "dotenv": "^16.3.1",
    "winston": "^3.11.0",
    "joi": "^17.11.0"
  },
  "devDependencies": {
    "nodemon": "^3.0.2",
    "jest": "^29.7.0"
  },
  "keywords": ["kubernetes", "database", "dbaas", "nodejs"],
  "author": "Your Name",
  "license": "MIT"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Environment Configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Server Configuration
PORT=3000
NODE_ENV=development

# Kubernetes Configuration
KUBECONFIG_PATH=~/.kube/config

# Database Configuration
METADATA_DB_PATH=./data/metadata.db

# Logging
LOG_LEVEL=info

# Security
JWT_SECRET=your-secret-key-here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Main Server File
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
require('dotenv').config();

const app = express();
const PORT = process.env.PORT || 3000;

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Basic logging middleware
app.use((req, res, next) =&amp;gt; {
  console.log(`${new Date().toISOString()} - ${req.method} ${req.path}`);
  next();
});

// Health check endpoint
app.get('/health', (req, res) =&amp;gt; {
  res.json({
    status: 'healthy',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
    environment: process.env.NODE_ENV
  });
});

// API information endpoint
app.get('/', (req, res) =&amp;gt; {
  res.json({
    name: 'Mini DBaaS API',
    version: '1.0.0',
    description: 'Database as a Service API built with Node.js and Kubernetes',
    endpoints: {
      health: '/health',
      instances: '/instances',
      'ha-clusters': '/ha-clusters'
    }
  });
});

// Error handling middleware
app.use((err, req, res, next) =&amp;gt; {
  console.error(err.stack);
  res.status(500).json({
    error: 'Something went wrong!',
    message: process.env.NODE_ENV === 'development' ? err.message : 'Internal server error'
  });
});

// 404 handler
app.use('*', (req, res) =&amp;gt; {
  res.status(404).json({
    error: 'Endpoint not found',
    path: req.originalUrl
  });
});

// Start server
app.listen(PORT, () =&amp;gt; {
  console.log(`🚀 Mini DBaaS API server running on port ${PORT}`);
  console.log(`📊 Health check: http://localhost:${PORT}/health`);
  console.log(`📚 API docs: http://localhost:${PORT}/`);
});

module.exports = app;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Installing Dependencies
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;backend
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 6: Testing Our Basic Setup
&lt;/h2&gt;

&lt;p&gt;Let's test our basic API server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start the server&lt;/span&gt;
npm start

&lt;span class="c"&gt;# In another terminal, test the endpoints&lt;/span&gt;
curl http://localhost:3000/health
curl http://localhost:3000/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see responses like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/health&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"healthy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-01-27T10:30:00.000Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"uptime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;5.123&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"environment"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"development"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;GET&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;/&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Mini DBaaS API"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Database as a Service API built with Node.js and Kubernetes"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"endpoints"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"health"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/health"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"instances"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/instances"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"ha-clusters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"/ha-clusters"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Creating Utility Functions
&lt;/h2&gt;

&lt;p&gt;Let's create some utility functions that we'll use throughout the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ResponseUtil {
  static success(data = null, message = 'Success') {
    return {
      success: true,
      message,
      data,
      timestamp: new Date().toISOString()
    };
  }

  static error(message = 'Error occurred', statusCode = 500, details = null) {
    return {
      success: false,
      message,
      statusCode,
      details,
      timestamp: new Date().toISOString()
    };
  }

  static paginated(data, page, limit, total) {
    return {
      success: true,
      data,
      pagination: {
        page: parseInt(page),
        limit: parseInt(limit),
        total,
        pages: Math.ceil(total / limit)
      },
      timestamp: new Date().toISOString()
    };
  }
}

module.exports = ResponseUtil;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
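&lt;p&gt;To see how these helpers behave, here is a short runnable sketch of the paginated response. The class is reproduced inline so the sketch runs standalone; in the project you would require it from utils/response.js:&lt;br&gt;
&lt;/p&gt;

```javascript
// Reproduces the paginated() helper from utils/response.js above so this
// sketch runs standalone; in the project you would require('./utils/response').
class ResponseUtil {
  static paginated(data, page, limit, total) {
    return {
      success: true,
      data,
      pagination: {
        page: parseInt(page),
        limit: parseInt(limit),
        total,
        pages: Math.ceil(total / limit)
      },
      timestamp: new Date().toISOString()
    };
  }
}

// Example: page 2 of 45 instances, 10 per page, so 5 pages in total.
// Note that page and limit arrive as strings, exactly as Express query
// parameters would, and parseInt normalizes them.
const body = ResponseUtil.paginated(['inst-11', 'inst-12'], '2', '10', 45);
console.log(body.pagination.pages); // 5
```

&lt;p&gt;A route handler would simply pass its query parameters and row count straight through, so every list endpoint returns the same pagination shape.&lt;/p&gt;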





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.errors({ stack: true }),
    winston.format.json()
  ),
  defaultMeta: { service: 'mini-dbaas-api' },
  transports: [
    new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
    new winston.transports.File({ filename: 'logs/combined.log' })
  ]
});

if (process.env.NODE_ENV !== 'production') {
  logger.add(new winston.transports.Console({
    format: winston.format.simple()
  }));
}

module.exports = logger;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 8: Creating Basic Middleware
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const Joi = require('joi');

const validate = (schema) =&amp;gt; {
  return (req, res, next) =&amp;gt; {
    const { error } = schema.validate(req.body);
    if (error) {
      return res.status(400).json({
        success: false,
        message: 'Validation error',
        details: error.details.map(detail =&amp;gt; detail.message)
      });
    }
    next();
  };
};

// Common validation schemas
const schemas = {
  instance: Joi.object({
    type: Joi.string().valid('postgresql', 'mysql', 'mariadb').required(),
    name: Joi.string().alphanum().min(3).max(50).required(),
    config: Joi.object({
      password: Joi.string().min(8).required(),
      storage: Joi.string().pattern(/^\d+(Ki|Mi|Gi|Ti|Pi|Ei|k|M|G|T|P|E)$/).required(),
      database: Joi.string().alphanum().optional(),
      memory: Joi.string().pattern(/^\d+(Ki|Mi|Gi|Ti|Pi|Ei|k|M|G|T|P|E)$/).optional(),
      cpu: Joi.string().pattern(/^\d+m?$/).optional()
    }).required()
  })
};

module.exports = { validate, schemas };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
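&lt;p&gt;You can sanity-check the validate() wrapper without spinning up Express or installing Joi. The stub schema below mimics Joi's { error } return shape, so this sketch runs with plain Node:&lt;br&gt;
&lt;/p&gt;

```javascript
// The validate() wrapper from middleware/validate.js, reproduced so this
// sketch runs without Express or Joi installed. The stub schema mimics
// Joi's { error } return shape; in the real app you pass schemas.instance.
const validate = (schema) => {
  return (req, res, next) => {
    const { error } = schema.validate(req.body);
    if (error) {
      return res.status(400).json({
        success: false,
        message: 'Validation error',
        details: error.details.map(detail => detail.message)
      });
    }
    next();
  };
};

// Stub schema: requires a "name" field, like a trimmed-down schemas.instance.
const stubSchema = {
  validate(body) {
    if (body.name) return {};
    return { error: { details: [{ message: '"name" is required' }] } };
  }
};

// Fake req/res/next objects to exercise both the success and failure paths.
function run(body) {
  const result = { nextCalled: false, status: null, payload: null };
  const req = { body };
  const res = {
    status(code) { result.status = code; return this; },
    json(payload) { result.payload = payload; return this; }
  };
  validate(stubSchema)(req, res, () => { result.nextCalled = true; });
  return result;
}

console.log(run({ name: 'my-postgres' }).nextCalled); // true
console.log(run({}).status);                          // 400
```

&lt;p&gt;In the real routes you wire the middleware ahead of the controller, e.g. router.post('/instances', validate(schemas.instance), instanceController.create), so invalid payloads are rejected with a 400 before any Kubernetes work starts.&lt;/p&gt;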



&lt;h2&gt;
  
  
  Step 9: Testing Everything Together
&lt;/h2&gt;

&lt;p&gt;Let's create a simple test script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

echo "🧪 Testing Mini DBaaS Basic Setup"

# Test 1: Check if server starts
echo "📡 Testing server startup..."
cd backend
npm start &amp;amp;
SERVER_PID=$!
sleep 3

# Test 2: Health check
echo "💓 Testing health endpoint..."
HEALTH_RESPONSE=$(curl -s http://localhost:3000/health)
if echo "$HEALTH_RESPONSE" | grep -q "healthy"; then
    echo "✅ Health check passed"
else
    echo "❌ Health check failed"
    echo "Response: $HEALTH_RESPONSE"
fi

# Test 3: API info
echo "📚 Testing API info endpoint..."
API_RESPONSE=$(curl -s http://localhost:3000/)
if echo "$API_RESPONSE" | grep -q "Mini DBaaS API"; then
    echo "✅ API info endpoint passed"
else
    echo "❌ API info endpoint failed"
    echo "Response: $API_RESPONSE"
fi

# Cleanup
kill $SERVER_PID
echo "🎉 Basic setup test completed!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make it executable and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x scripts/test-basic.sh
./scripts/test-basic.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What We've Accomplished
&lt;/h2&gt;

&lt;p&gt;Today we've successfully:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Set up Kubernetes environment&lt;/strong&gt; with minikube and necessary addons&lt;br&gt;
✅ &lt;strong&gt;Configured Helm repositories&lt;/strong&gt; for database charts&lt;br&gt;
✅ &lt;strong&gt;Created project structure&lt;/strong&gt; following best practices&lt;br&gt;
✅ &lt;strong&gt;Built basic Node.js API server&lt;/strong&gt; with Express&lt;br&gt;
✅ &lt;strong&gt;Implemented health checks&lt;/strong&gt; and basic routing&lt;br&gt;
✅ &lt;strong&gt;Added utility functions&lt;/strong&gt; for consistent responses&lt;br&gt;
✅ &lt;strong&gt;Created validation middleware&lt;/strong&gt; for API requests&lt;br&gt;
✅ &lt;strong&gt;Set up logging&lt;/strong&gt; with Winston&lt;br&gt;
✅ &lt;strong&gt;Tested the basic setup&lt;/strong&gt; with automated scripts&lt;/p&gt;
&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;In Part 3, we'll integrate Kubernetes functionality into our API server and create our first Helm charts for database deployment. We'll learn about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes client integration&lt;/li&gt;
&lt;li&gt;Helm chart creation and deployment&lt;/li&gt;
&lt;li&gt;Basic database instance management&lt;/li&gt;
&lt;li&gt;Error handling for Kubernetes operations&lt;/li&gt;
&lt;/ul&gt;
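&lt;p&gt;As a small preview of that error handling, here is a minimal sketch of mapping infrastructure failures to HTTP status codes. deployChart is a hypothetical stand-in for the Helm/Kubernetes call we'll build in Part 3; the pattern, not the call itself, is the point:&lt;br&gt;
&lt;/p&gt;

```javascript
// deployChart is a hypothetical stand-in for a real Helm/Kubernetes call.
// It simulates two failure modes such a call could surface.
async function deployChart(name) {
  if (name === 'exists') throw Object.assign(new Error('release exists'), { code: 'ALREADY_EXISTS' });
  if (name === 'down') throw Object.assign(new Error('connect ECONNREFUSED'), { code: 'ECONNREFUSED' });
  return { release: name, status: 'deployed' };
}

// Translate low-level error codes into consistent API responses.
function toHttpError(err) {
  if (err.code === 'ALREADY_EXISTS') return { status: 409, message: 'Instance already exists' };
  if (err.code === 'ECONNREFUSED') return { status: 503, message: 'Kubernetes cluster unreachable' };
  return { status: 500, message: 'Deployment failed' };
}

// The controller-level shape: try the deployment, map any failure.
async function createInstance(name) {
  try {
    const result = await deployChart(name);
    return { status: 201, body: result };
  } catch (err) {
    const mapped = toHttpError(err);
    return { status: mapped.status, body: { success: false, message: mapped.message } };
  }
}

createInstance('exists').then(r => console.log(r.status)); // 409
```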
&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Common Issues
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. minikube won't start&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Check Docker is running&lt;/span&gt;
docker ps

&lt;span class="c"&gt;# Reset minikube if needed&lt;/span&gt;
minikube delete
minikube start &lt;span class="nt"&gt;--cpus&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4 &lt;span class="nt"&gt;--memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;8192
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Port 3000 already in use&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Find and kill the process&lt;/span&gt;
lsof &lt;span class="nt"&gt;-ti&lt;/span&gt;:3000 | xargs &lt;span class="nb"&gt;kill&lt;/span&gt; &lt;span class="nt"&gt;-9&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Helm repository issues&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clear and re-add repositories&lt;/span&gt;
helm repo remove bitnami
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We now have a solid foundation for our mini DBaaS platform! Our API server is running, our Kubernetes environment is ready, and we have a clean project structure to build upon.&lt;/p&gt;

&lt;p&gt;In the next part, we'll dive into Kubernetes integration and start deploying actual database instances. Get ready to see your first PostgreSQL instance running in Kubernetes! 🚀&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Series Navigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="//../part1-architecture-overview"&gt;Part 1: Architecture Overview&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Part 2: Environment Setup &amp;amp; Basic API Server&lt;/strong&gt; (this post)&lt;/li&gt;
&lt;li&gt;Part 3: Kubernetes Integration &amp;amp; Helm Charts&lt;/li&gt;
&lt;li&gt;Part 4: Database Instance Management&lt;/li&gt;
&lt;li&gt;Part 5: Backup &amp;amp; Recovery with CSI VolumeSnapshots&lt;/li&gt;
&lt;li&gt;Part 6: High Availability with PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;Part 7: Multi-Tenant Features &amp;amp; Final Testing&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;#kubernetes #nodejs #express #api #development #tutorial #series #docker #minikube #helm&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Mini DBaaS with Kubernetes in One Week - Part 1: Architecture Overview</title>
      <dc:creator>프링글리스</dc:creator>
      <pubDate>Sun, 20 Jul 2025 07:48:54 +0000</pubDate>
      <link>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-in-one-week-part-1-architecture-overview-3eka</link>
      <guid>https://dev.to/_a3742acef86a9239f63/building-a-mini-dbaas-with-kubernetes-in-one-week-part-1-architecture-overview-3eka</guid>
      <description>&lt;h1&gt;
  
  
  Building a Mini DBaaS with Kubernetes in One Week - Part 1: Architecture Overview
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Ever wondered how cloud database services like AWS RDS or Google Cloud SQL work under the hood? What if you could build your own Database-as-a-Service (DBaaS) platform using Kubernetes? In this series, I'll show you how to create a fully functional mini DBaaS platform in just one week using Node.js and Kubernetes.&lt;/p&gt;

&lt;p&gt;By the end of this series, you'll have a working DBaaS that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create and manage PostgreSQL, MySQL, and MariaDB instances&lt;/li&gt;
&lt;li&gt;Provide high-availability PostgreSQL clusters with automatic failover&lt;/li&gt;
&lt;li&gt;Offer Aurora-style backup and recovery using CSI VolumeSnapshots&lt;/li&gt;
&lt;li&gt;Support multi-tenant isolation with namespaces&lt;/li&gt;
&lt;li&gt;Monitor database health and performance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Build Your Own DBaaS?
&lt;/h2&gt;

&lt;p&gt;Building a DBaaS might seem like overkill for small projects, but it's an excellent way to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Learn Kubernetes deeply&lt;/strong&gt;: Understand StatefulSets, PVCs, Operators, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Master cloud-native patterns&lt;/strong&gt;: Experience real-world distributed systems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gain DevOps skills&lt;/strong&gt;: Practice infrastructure as code with Helm charts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Understand database operations&lt;/strong&gt;: Learn about backup strategies, high availability, and monitoring&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Our Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Our mini DBaaS follows a clean, layered architecture:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────┐
│         User Requests (CLI/API)     │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│      Node.js API Server (Express)   │
│  • Instance CRUD operations         │
│  • Helm chart deployment            │
│  • Kubernetes resource management   │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│        Kubernetes (minikube)        │
│  ┌────────────────────────────────┐ │
│  │  Namespace per instance        │ │
│  │  StatefulSet + PVC (DB Pod)    │ │
│  │  Secret, ConfigMap             │ │
│  └────────────────────────────────┘ │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│    Local Storage (PVC, HostPath)    │
│  • MySQL/MariaDB/PostgreSQL data    │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│    CSI VolumeSnapshot Backup        │
│  • Aurora-style point-in-time       │
│  • 5-10 second snapshot creation    │
│  • 30-second recovery time          │
└──────────────┬──────────────────────┘
               ↓
┌─────────────────────────────────────┐
│  PostgreSQL HA (Zalando Operator)   │
│  • Automatic failover               │
│  • Read/write load balancing        │
│  • 3-5 node clusters                │
└─────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Technology Stack
&lt;/h2&gt;

&lt;p&gt;We'll use a modern, cloud-native stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Backend&lt;/strong&gt;: Node.js with Express (lightweight and fast)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration&lt;/strong&gt;: Kubernetes with Helm (industry standard)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Databases&lt;/strong&gt;: PostgreSQL, MySQL, MariaDB via Bitnami charts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Availability&lt;/strong&gt;: Zalando PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backup/Recovery&lt;/strong&gt;: CSI VolumeSnapshots (like AWS Aurora)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: PVC with hostPath (minikube) / cloud storage (production)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Real-time pod/helm status tracking&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Features We'll Implement
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Multi-Database Support&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL, MySQL, and MariaDB instances&lt;/li&gt;
&lt;li&gt;Custom Helm charts for each database type&lt;/li&gt;
&lt;li&gt;Automatic configuration and secret management&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;High Availability Clusters&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PostgreSQL HA using Zalando PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;Automatic failover and load balancing&lt;/li&gt;
&lt;li&gt;Master/Replica service separation&lt;/li&gt;
&lt;/ul&gt;
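&lt;p&gt;With the Zalando operator, the API server never manages replication itself: it submits a &lt;code&gt;postgresql&lt;/code&gt; custom resource and the operator builds the Patroni-managed cluster. A minimal sketch of that manifest (sizes, version, and the &lt;code&gt;dbaas&lt;/code&gt; team name are placeholder values):&lt;/p&gt;

```javascript
// Sketch: a Zalando postgres-operator custom resource as JSON.
function haClusterManifest(name, replicas = 3) {
  return {
    apiVersion: "acid.zalan.do/v1",
    kind: "postgresql",
    // Zalando convention: cluster name is prefixed with the teamId.
    metadata: { name: `dbaas-${name}`, namespace: `tenant-${name}` },
    spec: {
      teamId: "dbaas",
      numberOfInstances: replicas,     // one primary, the rest replicas
      volume: { size: "1Gi" },
      postgresql: { version: "15" },
    },
  };
}

console.log(JSON.stringify(haClusterManifest("orders"), null, 2));
```

&lt;p&gt;The operator then exposes the master/replica split as two Services, which is what the "service separation" bullet above refers to.&lt;/p&gt;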

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Aurora-Style Backup System&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CSI VolumeSnapshot for storage-level backups&lt;/li&gt;
&lt;li&gt;Point-in-time recovery&lt;/li&gt;
&lt;li&gt;Cross-instance backup restoration&lt;/li&gt;
&lt;/ul&gt;
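&lt;p&gt;A CSI VolumeSnapshot is itself just a Kubernetes object, so the API server can build it as JSON (the API accepts JSON as well as YAML). A sketch, using minikube's csi-hostpath snapshot class as an assumed default:&lt;/p&gt;

```javascript
// Sketch: build a VolumeSnapshot manifest for one instance's data PVC.
function snapshotManifest(instanceId, pvcName) {
  return {
    apiVersion: "snapshot.storage.k8s.io/v1",
    kind: "VolumeSnapshot",
    metadata: {
      // Timestamped name enables point-in-time lookup later.
      name: `${instanceId}-snap-${Date.now()}`,
      namespace: `tenant-${instanceId}`,
    },
    spec: {
      volumeSnapshotClassName: "csi-hostpath-snapclass", // minikube; varies per cluster
      source: { persistentVolumeClaimName: pvcName },
    },
  };
}

console.log(JSON.stringify(snapshotManifest("demo", "data-dbaas-demo-0"), null, 2));
```

&lt;p&gt;Restoration is the same idea in reverse: create a new PVC whose &lt;code&gt;dataSource&lt;/code&gt; references the VolumeSnapshot, and the CSI driver materializes the data without a logical dump/restore pass.&lt;/p&gt;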

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Multi-Tenant Isolation&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Namespace-based resource isolation&lt;/li&gt;
&lt;li&gt;Independent storage volumes per tenant&lt;/li&gt;
&lt;li&gt;Resource quotas and limits&lt;/li&gt;
&lt;/ul&gt;
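&lt;p&gt;Namespace isolation plus a per-tenant ResourceQuota is what keeps one noisy instance from starving the others. A sketch of the quota the API server could apply when creating a tenant (the limits below are illustrative defaults):&lt;/p&gt;

```javascript
// Sketch: a per-tenant ResourceQuota manifest.
function quotaManifest(tenant, limits = {}) {
  const { cpu = "2", memory = "4Gi", pvcCount = 3 } = limits;
  return {
    apiVersion: "v1",
    kind: "ResourceQuota",
    metadata: { name: "tenant-quota", namespace: `tenant-${tenant}` },
    spec: {
      hard: {
        "limits.cpu": cpu,
        "limits.memory": memory,
        "persistentvolumeclaims": String(pvcCount),  // caps instances per tenant
      },
    },
  };
}

console.log(JSON.stringify(quotaManifest("acme"), null, 2));
```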

&lt;h3&gt;
  
  
  5. &lt;strong&gt;RESTful API&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Complete CRUD operations for instances&lt;/li&gt;
&lt;li&gt;Real-time status monitoring&lt;/li&gt;
&lt;li&gt;Connection information retrieval&lt;/li&gt;
&lt;/ul&gt;
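&lt;p&gt;The REST surface can be sketched as a small dispatch table; the series uses Express, so treat these paths and handler names as illustrative rather than the final API contract:&lt;/p&gt;

```javascript
// Sketch: a framework-free dispatcher for the instance API.
const ROUTES = [
  { method: "POST",   pattern: /^\/instances$/,                      handler: "createInstance" },
  { method: "GET",    pattern: /^\/instances$/,                      handler: "listInstances" },
  { method: "GET",    pattern: /^\/instances\/([^/]+)$/,             handler: "getStatus" },
  { method: "GET",    pattern: /^\/instances\/([^/]+)\/connection$/, handler: "getConnection" },
  { method: "DELETE", pattern: /^\/instances\/([^/]+)$/,             handler: "deleteInstance" },
];

// Return the matching handler name and captured instance id, or null (404).
function matchRoute(method, path) {
  for (const r of ROUTES) {
    const hit = r.method === method ? path.match(r.pattern) : null;
    if (hit) return { handler: r.handler, id: hit[1] };
  }
  return null;
}

console.log(matchRoute("GET", "/instances/db1"));  // → { handler: 'getStatus', id: 'db1' }
```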

&lt;h2&gt;
  
  
  Development Timeline
&lt;/h2&gt;

&lt;p&gt;Here's our one-week development plan:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Day&lt;/th&gt;
&lt;th&gt;Focus&lt;/th&gt;
&lt;th&gt;Deliverables&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Environment Setup &amp;amp; Basic API&lt;/td&gt;
&lt;td&gt;Node.js server, basic routes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Kubernetes Integration&lt;/td&gt;
&lt;td&gt;Helm charts, basic deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Database Instance Management&lt;/td&gt;
&lt;td&gt;Create/delete/status APIs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Backup &amp;amp; Recovery System&lt;/td&gt;
&lt;td&gt;CSI VolumeSnapshots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High Availability Clusters&lt;/td&gt;
&lt;td&gt;Zalando PostgreSQL Operator&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Multi-Tenant Features&lt;/td&gt;
&lt;td&gt;Namespace isolation, quotas&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Day 7&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Testing &amp;amp; Documentation&lt;/td&gt;
&lt;td&gt;API testing, deployment guide&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What You'll Learn
&lt;/h2&gt;

&lt;p&gt;Throughout this series, you'll gain hands-on experience with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes StatefulSets&lt;/strong&gt;: Managing stateful applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume Claims&lt;/strong&gt;: Database storage management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm Charts&lt;/strong&gt;: Package and deploy applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Operators&lt;/strong&gt;: Advanced application management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CSI VolumeSnapshots&lt;/strong&gt;: Storage-level backup strategies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-tenancy&lt;/strong&gt;: Resource isolation and management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Design&lt;/strong&gt;: RESTful service architecture&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker Desktop&lt;/strong&gt; installed and running&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;minikube&lt;/strong&gt; for local Kubernetes cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm&lt;/strong&gt; for package management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js&lt;/strong&gt; (v18+) for the API server&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt; for Kubernetes interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;In the next post, we'll set up our development environment and create the basic Node.js API server structure. We'll start with the foundation and gradually build up to a fully functional DBaaS platform.&lt;/p&gt;

&lt;p&gt;Are you ready to build your own cloud database service? Let's dive in! 🚀&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Series Navigation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 1: Architecture Overview (this post)&lt;/li&gt;
&lt;li&gt;Part 2: Environment Setup &amp;amp; Basic API Server&lt;/li&gt;
&lt;li&gt;Part 3: Kubernetes Integration &amp;amp; Helm Charts&lt;/li&gt;
&lt;li&gt;Part 4: Database Instance Management&lt;/li&gt;
&lt;li&gt;Part 5: Backup &amp;amp; Recovery with CSI VolumeSnapshots&lt;/li&gt;
&lt;li&gt;Part 6: High Availability with PostgreSQL Operator&lt;/li&gt;
&lt;li&gt;Part 7: Multi-Tenant Features &amp;amp; Final Testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Follow along and build your own mini DBaaS platform!&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Tags
&lt;/h2&gt;

&lt;p&gt;
  #kubernetes #database #devops #nodejs #helm #postgresql #mysql #mariadb #cloud-native #tutorial #series
&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
