Damilare Ogundele

Building a Multi-Application Kubernetes Marketplace: TCP/UDP App Onboarding at Scale

Introduction

Recently, I tackled a marketplace app onboarding project that involved deploying a range of TCP/UDP and HTTP applications on Kubernetes. This post walks through the technical journey, the challenges we hit, and the solutions we implemented while onboarding applications such as MySQL, MongoDB, and RabbitMQ to our marketplace platform.

The Challenge

The goal was to create a standardized onboarding process for diverse applications with different networking requirements:

  • TCP/UDP applications requiring Network Load Balancers
  • HTTP applications needing ingress controllers and SSL termination
  • Database applications requiring persistent storage and proper health checks
  • Message brokers with multiple port configurations

Architecture Overview

Our solution leverages several key AWS and Kubernetes components:

Network Load Balancer Configuration

services:
  - name: tcp
    type: LoadBalancer
    protocol: TCP
    port: 3306
    targetPort: 3306
    lb:
      scheme: internet-facing
      type: nlb-ip
      target_type: ip
      healthcheck_protocol: TCP
      healthcheck_interval: 10
      healthcheck_timeout: 6
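
Under the hood (assuming the AWS Load Balancer Controller is installed in the cluster), a values block like this templates into an ordinary LoadBalancer Service carrying the controller's annotations. The Service name and selector below are illustrative, not the chart's exact output:

apiVersion: v1
kind: Service
metadata:
  name: mysql-tcp                    # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"        # "nlb-ip" mode: register pod IPs directly
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "TCP"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "6"
spec:
  type: LoadBalancer
  selector:
    app: mysql                       # illustrative selector
  ports:
    - name: tcp
      protocol: TCP
      port: 3306
      targetPort: 3306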

Helm Chart Structure

We standardized our deployments on a consistent Helm chart pattern with these key sections (a trimmed values sketch follows the list):

  • Global configuration for app identification and DNS
  • Deployment specs with resource limits and security contexts
  • Service definitions for both internal and external access
  • Storage management with persistent volumes
  • Secrets handling for sensitive configuration
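
To make that concrete, here is a trimmed sketch of the values layout we converge on; the field names are indicative of the pattern rather than the exact schema:

global:
  appName: mysql                # app identification
  dnsName: mysql.example.com    # external DNS record

deployment:
  image: mysql:8.0
  securityContext:
    runAsUser: 999
    fsGroup: 999
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 1Gi

services:
  - name: tcp
    type: LoadBalancer
    protocol: TCP
    port: 3306
    targetPort: 3306

volumes:
  - name: mysql-data-volume
    storageClassName: ebs-sc
    storage: 8Gi
    accessModes:
      - ReadWriteOnce

secrets:
  - name: mysql-credentials
    keys:
      - MYSQL_ROOT_PASSWORD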

Case Study: RabbitMQ Deployment

RabbitMQ presented an interesting challenge with its dual-port requirement (AMQP protocol on 5672 and Management UI on 15672). Here's how we handled it:

containers:
  - name: rabbitmq
    image: rabbitmq:4-management
    ports:
      - 5672  # AMQP
      - 15672 # Management UI
    livenessProbe:
      httpGet:
        path: /
        port: 15672
      initialDelaySeconds: 60
      periodSeconds: 30
    readinessProbe:
      httpGet:
        path: /
        port: 15672
      initialDelaySeconds: 30
      periodSeconds: 10

services:
  - name: amqp
    type: ClusterIP
    protocol: TCP
    port: 5672
    targetPort: 5672
  - name: management
    type: ClusterIP
    protocol: TCP
    port: 15672
    targetPort: 15672
    ingress:
      cert_issuer: "letsencrypt"
      class: nginx
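
Assuming an ingress-nginx controller and cert-manager with a ClusterIssuer named letsencrypt (matching the cert_issuer value above), the management service roughly templates into an Ingress like the following; the hostname and TLS secret name are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rabbitmq-management
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"    # cert-manager requests the TLS certificate
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - rabbitmq.example.com                       # placeholder hostname
      secretName: rabbitmq-management-tls
  rules:
    - host: rabbitmq.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: management
                port:
                  number: 15672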

Database Deployment Patterns

For database applications like MySQL, we focused on:

Persistent Storage

volumes:
  - name: mysql-data-volume
    storageClassName: ebs-sc
    storage: 8Gi
    accessModes:
      - ReadWriteOnce
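
For reference, ebs-sc is backed by the AWS EBS CSI driver. A typical StorageClass definition looks like this; the gp3 and encryption parameters are common defaults rather than necessarily our exact settings:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com              # AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer   # create the volume in the AZ where the pod lands
parameters:
  type: gp3
  encrypted: "true"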

Health Checks

readinessProbe:
  tcpSocket:
    port: 3306
  initialDelaySeconds: 30
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 3306
  initialDelaySeconds: 60
  periodSeconds: 30
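
The TCP socket checks above only confirm that the port is accepting connections. A stricter variant used by some charts (not part of our template) is an exec probe running mysqladmin ping, which assumes the root password is exposed to the container as MYSQL_ROOT_PASSWORD:

readinessProbe:
  exec:
    # succeeds only once the server answers protocol-level requests,
    # not merely when the port accepts TCP connections
    command:
      - sh
      - -c
      - mysqladmin ping -uroot -p"$MYSQL_ROOT_PASSWORD"
  initialDelaySeconds: 30
  periodSeconds: 10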

Key Learnings

1. Standardization is Critical

Creating a consistent Helm chart structure across all applications significantly reduced deployment complexity and improved maintainability.

2. Health Check Strategy

Different applications require different health check approaches:

  • HTTP applications: Use HTTP GET requests to health endpoints
  • Databases: Use TCP socket checks on primary ports
  • Message brokers: Check management interfaces when available

3. Security Context Management

Proper user and group ID management is essential for persistent storage: the container must run with the UID/GID that owns the mounted volume (the official mysql image, for example, runs as UID 999), and fsGroup ensures the volume is group-writable by that ID:

securityContext:
  runAsUser: 999
  fsGroup: 999

4. Resource Management

Setting explicit resource requests and limits keeps any single application from starving the rest of the cluster:

resources:
  requests:
    cpu: 200m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi

Automation and Testing

We implemented automated testing procedures to ensure each application deployment meets our marketplace standards:

  • Connectivity tests for TCP/UDP services (see the test-pod sketch after this list)
  • SSL certificate validation for HTTPS services
  • Persistent storage verification for stateful applications
  • Health check validation for all deployments
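
As an example of the connectivity checks, a Helm test hook can exercise a TCP service from inside the cluster. A minimal sketch follows, in which the service name, port, and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: mysql-connectivity-test
  annotations:
    "helm.sh/hook": test          # runs via `helm test <release>`
spec:
  restartPolicy: Never
  containers:
    - name: tcp-check
      image: busybox:1.36
      # nc -z exits non-zero unless the TCP port accepts a connection within the timeout
      command: ["nc", "-zv", "-w", "5", "mysql", "3306"]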

Results and Impact

The standardized onboarding process has enabled us to:

  • Reduce deployment time by 70%
  • Maintain consistent security policies across all applications
  • Simplify troubleshooting with standardized logging and monitoring
  • Scale our marketplace offerings efficiently

Future Enhancements

Looking ahead, we're planning to:

  • Implement GitOps workflows for automated deployments
  • Add application-specific monitoring dashboards
  • Develop self-service onboarding tools for developers
  • Expand support for more complex multi-tier applications

Conclusion

Building a multi-application Kubernetes marketplace requires careful planning, standardization, and attention to the unique requirements of each application type. By leveraging Helm charts, AWS Load Balancers, and Kubernetes best practices, we've created a robust platform that can scale with our growing marketplace needs.

The key takeaway is that while each application has unique requirements, a well-designed template system can accommodate this diversity while maintaining operational consistency.

