Introduction
Welcome back! In Part 1, we discussed the architecture and planning for our mini DBaaS platform. Today, we'll get our hands dirty and set up the development environment, then create the foundation of our Node.js API server.
By the end of this post, you'll have:
- A working Kubernetes cluster with minikube
- A basic Node.js API server with Express
- Initial project structure following best practices
- Basic health check and routing setup
Prerequisites Check
Before we start, let's make sure you have all the required tools installed:
# Check Docker
docker --version
# Check Node.js (v18+)
node --version
# Check kubectl
kubectl version --client
# Check Helm
helm version
# Check minikube
minikube version
If any of these are missing, install them first:
- Docker Desktop: https://www.docker.com/products/docker-desktop/
- Node.js: https://nodejs.org/
- kubectl: https://kubernetes.io/docs/tasks/tools/
- Helm: https://helm.sh/docs/intro/install/
- minikube: https://minikube.sigs.k8s.io/docs/start/
Step 1: Setting Up Kubernetes Environment
Let's start by setting up our local Kubernetes cluster:
# Start minikube with adequate resources
minikube start --cpus=4 --memory=8192 --disk-size=20g
# Enable necessary addons
minikube addons enable csi-hostpath-driver
minikube addons enable volumesnapshots
# Verify cluster is running
kubectl cluster-info
kubectl get nodes
Verify CSI and VolumeSnapshot Support
# Check CSI drivers
kubectl get csidriver
# Check VolumeSnapshot classes
kubectl get volumesnapshotclass
# You should see:
# NAME                     DRIVER                DELETIONPOLICY   AGE
# csi-hostpath-snapclass   hostpath.csi.k8s.io   Delete           1m
Step 2: Setting Up Helm Repositories
We'll use Bitnami charts for our standalone databases, and the Zalando operator for the HA PostgreSQL clusters coming later in the series:
# Add Bitnami repository
helm repo add bitnami https://charts.bitnami.com/bitnami
# Add Zalando PostgreSQL Operator repository
helm repo add zalando https://opensource.zalando.com/postgres-operator/charts/postgres-operator
# Update repositories
helm repo update
# Verify repositories
helm repo list
Step 3: Creating Project Structure
Let's create a well-organized project structure:
# Create project directory
mkdir mini-dbaas && cd mini-dbaas
# Create backend structure
mkdir -p backend/{controllers,routes,services,middleware,utils}
mkdir -p helm-charts/{postgresql-local,mysql-local,mariadb-local}
mkdir -p k8s/operators
mkdir -p scripts
# Create initial files
touch backend/package.json
touch backend/index.js
touch backend/.env.example
touch backend/.env
touch README.md
Your project structure should look like this:
mini-dbaas/
├── backend/
│   ├── controllers/
│   ├── routes/
│   ├── services/
│   ├── middleware/
│   ├── utils/
│   ├── package.json
│   ├── index.js
│   ├── .env.example
│   └── .env
├── helm-charts/
│   ├── postgresql-local/
│   ├── mysql-local/
│   └── mariadb-local/
├── k8s/
│   └── operators/
├── scripts/
└── README.md
Step 4: Setting Up Node.js Backend
Let's create our Node.js API server:
Package.json Setup
{
"name": "mini-dbaas-backend",
"version": "1.0.0",
"description": "Mini DBaaS API Server",
"main": "index.js",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js",
"test": "jest"
},
"dependencies": {
"express": "^4.18.2",
"cors": "^2.8.5",
"helmet": "^7.1.0",
"dotenv": "^16.3.1",
"winston": "^3.11.0",
"joi": "^17.11.0"
},
"devDependencies": {
"nodemon": "^3.0.2",
"jest": "^29.7.0"
},
"keywords": ["kubernetes", "database", "dbaas", "nodejs"],
"author": "Your Name",
"license": "MIT"
}
Environment Configuration
Add the following to backend/.env, and keep a secrets-free copy in backend/.env.example:
# Server Configuration
PORT=3000
NODE_ENV=development
# Kubernetes Configuration
KUBECONFIG_PATH=~/.kube/config
# Database Configuration
METADATA_DB_PATH=./data/metadata.db
# Logging
LOG_LEVEL=info
# Security
JWT_SECRET=your-secret-key-here
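One gotcha: dotenv loads values verbatim, so the ~ in KUBECONFIG_PATH is not expanded the way a shell would expand it. Here's a minimal sketch of how we could resolve it later when we actually need the kubeconfig (the helper file name is just an illustration, not part of the steps above):
// backend/utils/kubeconfig.js (illustrative helper)
const os = require('os');
const path = require('path');

function resolveKubeconfigPath() {
  const raw = process.env.KUBECONFIG_PATH || '~/.kube/config';
  // dotenv does not expand "~", so map it to the user's home directory ourselves
  const expanded = raw.startsWith('~') ? path.join(os.homedir(), raw.slice(1)) : raw;
  return path.resolve(expanded);
}

module.exports = { resolveKubeconfigPath };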
Main Server File
Create backend/index.js:
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');
require('dotenv').config();
const app = express();
const PORT = process.env.PORT || 3000;
// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
// Basic logging middleware
app.use((req, res, next) => {
console.log(`${new Date().toISOString()} - ${req.method} ${req.path}`);
next();
});
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
uptime: process.uptime(),
environment: process.env.NODE_ENV
});
});
// API information endpoint
app.get('/', (req, res) => {
res.json({
name: 'Mini DBaaS API',
version: '1.0.0',
description: 'Database as a Service API built with Node.js and Kubernetes',
endpoints: {
health: '/health',
instances: '/instances',
'ha-clusters': '/ha-clusters'
}
});
});
// Error handling middleware
app.use((err, req, res, next) => {
console.error(err.stack);
res.status(500).json({
error: 'Something went wrong!',
message: process.env.NODE_ENV === 'development' ? err.message : 'Internal server error'
});
});
// 404 handler
app.use('*', (req, res) => {
res.status(404).json({
error: 'Endpoint not found',
path: req.originalUrl
});
});
// Start server
app.listen(PORT, () => {
console.log(`🚀 Mini DBaaS API server running on port ${PORT}`);
console.log(`📊 Health check: http://localhost:${PORT}/health`);
console.log(`📚 API docs: http://localhost:${PORT}/`);
});
module.exports = app;
Step 5: Installing Dependencies
cd backend
npm install
Step 6: Testing Our Basic Setup
Let's test our basic API server:
# Start the server
npm start
# In another terminal, test the endpoints
curl http://localhost:3000/health
curl http://localhost:3000/
You should see responses like:
// GET /health
{
"status": "healthy",
"timestamp": "2025-01-27T10:30:00.000Z",
"uptime": 5.123,
"environment": "development"
}
// GET /
{
"name": "Mini DBaaS API",
"version": "1.0.0",
"description": "Database as a Service API built with Node.js and Kubernetes",
"endpoints": {
"health": "/health",
"instances": "/instances",
"ha-clusters": "/ha-clusters"
}
}
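If you'd rather not curl by hand, the same checks can run as a Jest test against the exported app (that's why index.js ends with module.exports = app). Stop the dev server first, since requiring index.js also triggers app.listen on port 3000. A sketch, assuming you add supertest as a dev dependency (npm install --save-dev supertest) and save it as something like backend/index.test.js:
const request = require('supertest');
const app = require('./index');

describe('basic endpoints', () => {
  test('GET /health reports a healthy status', async () => {
    const res = await request(app).get('/health');
    expect(res.status).toBe(200);
    expect(res.body.status).toBe('healthy');
  });

  test('GET / returns API metadata', async () => {
    const res = await request(app).get('/');
    expect(res.status).toBe(200);
    expect(res.body.name).toBe('Mini DBaaS API');
  });
});
If Jest complains about an open handle, the usual fix is to wrap the app.listen call in index.js with if (require.main === module) { ... } so tests import the app without starting the server.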
Step 7: Creating Utility Functions
Let's create some utility functions that we'll use throughout the project. First, a response formatter that keeps our API responses consistent (for example, backend/utils/response.js):
class ResponseUtil {
static success(data = null, message = 'Success') {
return {
success: true,
message,
data,
timestamp: new Date().toISOString()
};
}
static error(message = 'Error occurred', statusCode = 500, details = null) {
return {
success: false,
message,
statusCode,
details,
timestamp: new Date().toISOString()
};
}
static paginated(data, page, limit, total) {
return {
success: true,
data,
pagination: {
page: parseInt(page),
limit: parseInt(limit),
total,
pages: Math.ceil(total / limit)
},
timestamp: new Date().toISOString()
};
}
}
module.exports = ResponseUtil;
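Here's roughly how these helpers will be used once real routes exist; the /instances handler below is only an illustration of the response shape, not something we've built yet:
// Illustrative only - real instance routes arrive later in the series
const express = require('express');
const ResponseUtil = require('../utils/response'); // assumes the class above lives in backend/utils/response.js

const router = express.Router();

router.get('/instances', (req, res) => {
  const instances = []; // later: read from the metadata store
  res.json(ResponseUtil.success(instances, 'Instances retrieved'));
});

module.exports = router;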
Next, a Winston-based logger (for example, backend/utils/logger.js). The file transports below write into a logs/ directory, so create it (mkdir -p backend/logs) before starting the server:
const winston = require('winston');
const logger = winston.createLogger({
level: process.env.LOG_LEVEL || 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: { service: 'mini-dbaas-api' },
transports: [
new winston.transports.File({ filename: 'logs/error.log', level: 'error' }),
new winston.transports.File({ filename: 'logs/combined.log' })
]
});
if (process.env.NODE_ENV !== 'production') {
logger.add(new winston.transports.Console({
format: winston.format.simple()
}));
}
module.exports = logger;
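With the logger in place, we can replace the console.log request logging in index.js with structured logs. A small sketch, written as its own middleware file (the file name is an assumption; we'll formalize middleware like this in the next step):
// backend/middleware/requestLogger.js (illustrative)
const logger = require('../utils/logger');

// Logs one structured entry per incoming request, then hands off to the next middleware
function requestLogger(req, res, next) {
  logger.info('incoming request', { method: req.method, path: req.path });
  next();
}

module.exports = requestLogger;
Then in index.js, app.use(require('./middleware/requestLogger')); replaces the console.log middleware.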
Step 8: Creating Basic Middleware
We'll validate incoming request bodies with Joi so malformed payloads are rejected before they reach any Kubernetes logic. For example, in backend/middleware/validation.js:
const Joi = require('joi');
const validate = (schema) => {
return (req, res, next) => {
const { error } = schema.validate(req.body);
if (error) {
return res.status(400).json({
success: false,
message: 'Validation error',
details: error.details.map(detail => detail.message)
});
}
next();
};
};
// Common validation schemas
const schemas = {
instance: Joi.object({
type: Joi.string().valid('postgresql', 'mysql', 'mariadb').required(),
name: Joi.string().alphanum().min(3).max(50).required(),
config: Joi.object({
password: Joi.string().min(8).required(),
storage: Joi.string().pattern(/^\d+(Ei|Pi|Ti|Gi|Mi|Ki|E|P|T|G|M|K)$/).required(), // Kubernetes quantity, e.g. "10Gi"
database: Joi.string().alphanum().optional(),
memory: Joi.string().pattern(/^\d+(Ei|Pi|Ti|Gi|Mi|Ki|E|P|T|G|M|K)$/).optional(), // e.g. "512Mi"
cpu: Joi.string().pattern(/^\d+m?$/).optional()
}).required()
})
};
module.exports = { validate, schemas };
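To see how this wires up, here's a sketch of a route using validate with the instance schema. The /instances routes proper arrive in later parts, so treat the handler and file names as placeholders:
// backend/routes/instances.js (placeholder sketch)
const express = require('express');
const { validate, schemas } = require('../middleware/validation'); // file names are assumptions
const ResponseUtil = require('../utils/response');

const router = express.Router();

// Invalid payloads are rejected with a 400 before any Kubernetes work happens
router.post('/', validate(schemas.instance), (req, res) => {
  // Later: trigger a Helm deployment for req.body.type
  res.status(201).json(ResponseUtil.success(req.body, 'Instance request accepted'));
});

module.exports = router;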
Step 9: Testing Everything Together
Let's create a simple test script at scripts/test-basic.sh:
#!/bin/bash
echo "🧪 Testing Mini DBaaS Basic Setup"
# Test 1: Check if server starts
echo "📡 Testing server startup..."
cd backend
npm start &
SERVER_PID=$!
sleep 3
# Test 2: Health check
echo "💓 Testing health endpoint..."
HEALTH_RESPONSE=$(curl -s http://localhost:3000/health)
if echo "$HEALTH_RESPONSE" | grep -q "healthy"; then
echo "✅ Health check passed"
else
echo "❌ Health check failed"
echo "Response: $HEALTH_RESPONSE"
fi
# Test 3: API info
echo "📚 Testing API info endpoint..."
API_RESPONSE=$(curl -s http://localhost:3000/)
if echo "$API_RESPONSE" | grep -q "Mini DBaaS API"; then
echo "✅ API info endpoint passed"
else
echo "❌ API info endpoint failed"
echo "Response: $API_RESPONSE"
fi
# Cleanup
kill $SERVER_PID
echo "🎉 Basic setup test completed!"
Make it executable and run:
chmod +x scripts/test-basic.sh
./scripts/test-basic.sh
What We've Accomplished
Today we've successfully:
✅ Set up Kubernetes environment with minikube and necessary addons
✅ Configured Helm repositories for database charts
✅ Created project structure following best practices
✅ Built basic Node.js API server with Express
✅ Implemented health checks and basic routing
✅ Added utility functions for consistent responses
✅ Created validation middleware for API requests
✅ Set up logging with Winston
✅ Tested the basic setup with automated scripts
Next Steps
In Part 3, we'll integrate Kubernetes functionality into our API server and create our first Helm charts for database deployment. We'll learn about:
- Kubernetes client integration
- Helm chart creation and deployment
- Basic database instance management
- Error handling for Kubernetes operations
Troubleshooting
Common Issues
1. minikube won't start
# Check Docker is running
docker ps
# Reset minikube if needed
minikube delete
minikube start --cpus=4 --memory=8192
2. Port 3000 already in use
# Find and kill the process
lsof -ti:3000 | xargs kill -9
3. Helm repository issues
# Clear and re-add repositories
helm repo remove bitnami
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
Summary
We now have a solid foundation for our mini DBaaS platform! Our API server is running, our Kubernetes environment is ready, and we have a clean project structure to build upon.
In the next part, we'll dive into Kubernetes integration and start deploying actual database instances. Get ready to see your first PostgreSQL instance running in Kubernetes! 🚀
Series Navigation:
- Part 1: Architecture Overview
- Part 2: Environment Setup & Basic API Server (this post)
- Part 3: Kubernetes Integration & Helm Charts
- Part 4: Database Instance Management
- Part 5: Backup & Recovery with CSI VolumeSnapshots
- Part 6: High Availability with PostgreSQL Operator
- Part 7: Multi-Tenant Features & Final Testing