Deploying a MongoDB Collection Generator on Kubernetes

Dmitry Romanoff

Creating a utility to generate 100 MongoDB collections, each populated with 1 million random documents, and deploying it on Kubernetes involves several steps. This guide walks through the process, from setting up a Kubernetes environment to generating the collections and deploying the job in a dedicated namespace.

1. Setting Up Your Kubernetes Environment

Ensure you have a Kubernetes cluster (such as GKE, EKS, AKS, or Minikube) and configure kubectl to connect to it.
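
If you are working locally, Minikube is a quick way to get a cluster; the commands below are a minimal sketch of starting one and confirming that kubectl points at it (adapt this to GKE, EKS, or AKS as needed):

minikube start

kubectl config current-context
kubectl cluster-info
kubectl get nodes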

2. Create a Dedicated Namespace

To keep this deployment isolated, create a namespace called my-lab:

kubectl create namespace my-lab
kubectl get ns my-lab

3. Deploy MongoDB on Kubernetes

Create a Persistent Volume (PV)

Create a mongo-pv.yaml file to define a persistent volume for MongoDB data (this example uses hostPath, which is suitable for single-node clusters such as Minikube):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/mongo

Apply the PV:

kubectl apply -f mongo-pv.yaml
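
PersistentVolumes are cluster-scoped, so you can confirm the volume exists without specifying a namespace:

kubectl get pv mongo-pv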

Create a Persistent Volume Claim (PVC)

Define a persistent volume claim in mongo-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
  namespace: my-lab
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply the PVC:

kubectl apply -f mongo-pvc.yaml
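
Check that the claim reports a Bound status; depending on your cluster's default StorageClass it may bind to mongo-pv or to a dynamically provisioned volume:

kubectl get pvc mongo-pvc -n my-lab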

Create a MongoDB Deployment

Define the MongoDB deployment and service in mongo-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: my-lab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:latest
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "root"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "password"
          volumeMounts:
            - name: mongo-storage
              mountPath: /data/db
      volumes:
        - name: mongo-storage
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  namespace: my-lab
spec:
  type: ClusterIP
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo

Apply the deployment:

kubectl apply -f mongo-deployment.yaml
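
Wait for the MongoDB pod to become ready and note its name; the following steps refer to it as <mongo-pod-name>:

kubectl get pods -n my-lab -l app=mongo
kubectl wait --for=condition=ready pod -l app=mongo -n my-lab --timeout=120s

# Optionally capture the pod name for later use
MONGO_POD=$(kubectl get pod -n my-lab -l app=mongo -o jsonpath='{.items[0].metadata.name}')
echo $MONGO_POD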

4. Connect to MongoDB

Verify the MongoDB deployment by connecting to it:

kubectl exec -it <mongo-pod-name> -n my-lab -- mongosh -u root -p password
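
For a quick non-interactive check, you can run a ping through mongosh's --eval instead of opening a shell:

kubectl exec -it <mongo-pod-name> -n my-lab -- \
  mongosh -u root -p password --eval 'db.adminCommand({ ping: 1 })'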

5. Verify Persistence

Scale the MongoDB deployment down to zero replicas and back up to one to confirm that the data persists across a pod restart:

kubectl scale deployment mongo --replicas=0 -n my-lab
kubectl scale deployment mongo --replicas=1 -n my-lab
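
To make the check concrete, you can write a marker document before the scale-down and read it back afterwards. This is just a sketch: persistence_test and testdb are illustrative names, and the pod name changes after the restart.

# Before scaling down: write a marker document
kubectl exec -it <mongo-pod-name> -n my-lab -- \
  mongosh -u root -p password --eval 'db.getSiblingDB("testdb").persistence_test.insertOne({ check: "before-restart" })'

# After scaling back up: read it back from the new pod
kubectl exec -it <new-mongo-pod-name> -n my-lab -- \
  mongosh -u root -p password --eval 'db.getSiblingDB("testdb").persistence_test.find()'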

6. Create a Python Utility for Collection Generation

Using Python, create a script named mongo_populator.py (the name referenced in the Dockerfile below) that creates the collections and populates them with random documents:

import os
import random
import string

from pymongo import MongoClient

def random_string(length=10):
    return ''.join(random.choices(string.ascii_letters + string.digits, k=length))

def create_collections_and_populate(db_name='mydatabase', collections_count=100,
                                    documents_per_collection=1_000_000, batch_size=10_000):
    # The Kubernetes Job injects MONGO_URI; fall back to the in-cluster default.
    mongo_uri = os.environ.get('MONGO_URI', 'mongodb://root:password@mongo:27017/')
    client = MongoClient(mongo_uri)
    db = client[db_name]

    for i in range(collections_count):
        collection_name = f'collection_{i+1}'
        collection = db[collection_name]
        print(f'Creating collection: {collection_name}')

        # Insert in batches so the script never holds a million documents in memory at once.
        for start in range(0, documents_per_collection, batch_size):
            size = min(batch_size, documents_per_collection - start)
            batch = [{'name': random_string(), 'value': random.randint(1, 100)} for _ in range(size)]
            collection.insert_many(batch)
        print(f'Inserted {documents_per_collection} documents into {collection_name}')

if __name__ == "__main__":
    create_collections_and_populate()
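
Before containerizing it, you can try the script against the cluster by port-forwarding the mongo service and pointing MONGO_URI at localhost; for a quick local test you may want to lower collections_count and documents_per_collection first:

kubectl port-forward svc/mongo 27017:27017 -n my-lab &

pip install pymongo
MONGO_URI="mongodb://root:password@localhost:27017/" python mongo_populator.py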

7. Dockerize the Python Utility

Create a Dockerfile to containerize the Python script:

FROM python:3.9-slim

WORKDIR /app
COPY mongo_populator.py .
RUN pip install pymongo

CMD ["python", "mongo_populator.py"]

Build and push the image to a container registry:

docker build -t <your-docker-repo>/mongo-populator:latest .
docker push <your-docker-repo>/mongo-populator:latest
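
If you are testing on Minikube and prefer not to push to a remote registry, you can load the locally built image into the cluster instead. Note that you would then also need to set imagePullPolicy: IfNotPresent on the job's container in the next step, since the :latest tag defaults to always pulling:

minikube image load <your-docker-repo>/mongo-populator:latest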

8. Create a Kubernetes Job

Define a job in mongo-populator-job.yaml to run the collection generation script:

apiVersion: batch/v1
kind: Job
metadata:
  name: mongo-populator
  namespace: my-lab
spec:
  template:
    spec:
      containers:
        - name: mongo-populator
          image: <your-docker-repo>/mongo-populator:latest
          env:
            - name: MONGO_URI
              value: "mongodb://root:password@mongo:27017/"
      restartPolicy: Never
  backoffLimit: 4

Apply the job:

kubectl apply -f mongo-populator-job.yaml
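
Generating 100 collections with a million documents each takes a while, so it is worth following the job and its logs while it runs:

kubectl get jobs -n my-lab
kubectl get pods -n my-lab -l job-name=mongo-populator
kubectl logs -f job/mongo-populator -n my-lab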

9. Verify Collection Generation

After the job completes, connect to MongoDB to examine the data:

kubectl exec -it <mongo-pod-name> -n my-lab -- mongosh -u root -p password

In MongoDB:

use mydatabase
show collections
db.collection_9.find().limit(5).pretty()

db.getCollectionNames().forEach(function(collection) {
    var count = db[collection].countDocuments();
    print(collection + ": " + count + " documents");
});

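The same per-collection counts can also be gathered non-interactively from outside the pod, which is handy for scripting:

kubectl exec -it <mongo-pod-name> -n my-lab -- \
  mongosh -u root -p password --quiet --eval '
    const mydb = db.getSiblingDB("mydatabase");
    mydb.getCollectionNames().forEach(function (c) {
      print(c + ": " + mydb.getCollection(c).countDocuments() + " documents");
    });'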

Each collection should contain 1 million documents, confirming that the data generation job was successful.
