Indal Kumar

Kong API Gateway Setup: Basic to Advanced Usage

Prerequisites

  • Docker and Docker Compose installed
  • Basic understanding of API concepts
  • Terminal access

Step 1: Setting Up Kong Using Docker Compose

First, create a new directory for your Kong project and create a docker-compose.yml file:

version: '3.7'

services:
  kong-database:
    image: postgres:13
    container_name: kong-database
    environment:
      POSTGRES_USER: kong
      POSTGRES_DB: kong
      POSTGRES_PASSWORD: kongpass
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "kong"]
      interval: 5s
      timeout: 5s
      retries: 5

  kong-migration:
    image: kong:latest
    command: kong migrations bootstrap
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kongpass
    depends_on:
      - kong-database

  kong:
    image: kong:latest
    container_name: kong
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: kongpass
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001
      KONG_PROXY_LISTEN: 0.0.0.0:8000
    ports:
      - "8000:8000"
      - "8001:8001"
      - "8443:8443"
      - "8444:8444"
    depends_on:
      - kong-migration
      - kong-database

  mock-api:
    image: mockserver/mockserver:latest
    ports:
      - "1080:1080"

Step 2: Start the Kong Environment

# Start all services
docker-compose up -d

# Verify Kong is running
curl http://localhost:8001

Step 3: Create Your First Service

A Service in Kong represents an upstream API or microservice. Let's create one that points to our mock server:

curl -i -X POST http://localhost:8001/services \
  --data name=mock-service \
  --data url='http://mock-api:1080'

Step 4: Create a Route

Routes determine how requests are sent to Services:

curl -i -X POST http://localhost:8001/services/mock-service/routes \
  --data paths[]=/api \
  --data name=mock-route
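
At this point, requests to http://localhost:8000/api are proxied to the mock server. MockServer returns 404 until it is given an expectation, so as a quick sanity check you can register one through its expectation API and call it via Kong. Note that Kong strips the matched /api prefix by default (strip_path=true), so /api/hello reaches the upstream as /hello; the /hello path and response body below are just illustrative values:

# Register a MockServer expectation for /hello
curl -X PUT http://localhost:1080/mockserver/expectation \
  -H 'Content-Type: application/json' \
  -d '{"httpRequest": {"path": "/hello"}, "httpResponse": {"statusCode": 200, "body": "Hello from MockServer"}}'

# Call it through Kong; the Via header in the response confirms it was proxied
curl -i http://localhost:8000/api/hello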

Step 5: Configure Basic Authentication

Let's add basic authentication to secure our API:

# Enable the basic-auth plugin
curl -i -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=basic-auth \
  --data config.hide_credentials=true

# Create a consumer
curl -i -X POST http://localhost:8001/consumers \
  --data username=john

# Create credentials for the consumer
curl -i -X POST http://localhost:8001/consumers/john/basic-auth \
  --data username=john \
  --data password=secret

Step 6: Test the Setup

# This should fail (unauthorized)
curl -i http://localhost:8000/api

# This should succeed
curl -i http://localhost:8000/api \
  -H 'Authorization: Basic am9objpzZWNyZXQ='
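
The Authorization header above is just the Base64 encoding of john:secret, so you can also let curl build it for you. Keep in mind that a 404 from MockServer still counts as success here: the point is that Kong no longer rejects the request with a 401.

# Equivalent request, letting curl encode the credentials
curl -i http://localhost:8000/api -u john:secret

# How the header value was generated
echo -n 'john:secret' | base64   # am9objpzZWNyZXQ=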

Step 7: Add Rate Limiting

Let's add rate limiting to protect our API:

curl -i -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=rate-limiting \
  --data config.minute=5 \
  --data config.hour=100
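
To see the limit in action, send more than five requests within a minute; once the quota is exhausted Kong responds with HTTP 429 and rate-limit headers (X-RateLimit-Remaining-Minute and friends):

# The sixth request within a minute should return 429
for i in $(seq 1 6); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/api \
    -H 'Authorization: Basic am9objpzZWNyZXQ='
done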

Common Plugins and Their Uses

  1. CORS: Enable cross-origin resource sharing
curl -i -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=cors \
  --data config.origins=* \
  --data config.methods=GET,POST,PUT,DELETE \
  --data config.headers=Content-Type,Authorization
  2. Key Authentication: An alternative to basic auth (see the credential example after this list)
curl -i -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=key-auth
  3. OAuth2: For more complex authentication flows
curl -i -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=oauth2 \
  --data config.scopes=email,profile \
  --data config.mandatory_scope=true
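
For key-auth, each consumer also needs a key credential, which clients then send in the apikey header (the key value here is just an example):

# Create a key for the existing consumer
curl -i -X POST http://localhost:8001/consumers/john/key-auth \
  --data key=my-api-key

# Call the API with the key
curl -i http://localhost:8000/api -H 'apikey: my-api-key'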

Monitoring and Admin Tasks

Check Status

# Get basic status
curl http://localhost:8001/status

# List all services
curl http://localhost:8001/services

# List all routes
curl http://localhost:8001/routes

# List all plugins
curl http://localhost:8001/plugins

Common Issues and Solutions

  1. Database Connection Issues

    • Check if Postgres is running: docker ps | grep postgres
    • Verify connection settings in docker-compose.yml
    • Ensure migrations have completed successfully
  2. Plugin Problems

    • Verify plugin configuration: curl http://localhost:8001/plugins/{plugin-id}
    • Check Kong error logs: docker logs kong
    • Ensure plugin is compatible with your Kong version
  3. Performance Issues

    • Monitor Kong's resources: docker stats kong
    • Check rate limiting configuration
    • Review upstream service response times

Best Practices

  1. Security

    • Always use HTTPS in production
    • Implement rate limiting
    • Use appropriate authentication methods
    • Regularly update Kong and plugins
  2. Performance

    • Enable caching when appropriate
    • Monitor and adjust rate limits
    • Use connection pooling
    • Configure appropriate timeouts
  3. Maintenance

    • Regular backup of Kong's database
    • Monitor Kong's logs
    • Keep Kong and plugins updated
    • Document all configuration changes

Clean Up

To stop and remove all containers:

docker-compose down

To remove all data (including database):

docker-compose down -v

## Advanced Features

### 1. Load Balancing and Service Discovery

Kong can load balance traffic across multiple upstream services:

```bash
# Create an upstream
curl -X POST http://localhost:8001/upstreams \
  --data name=mock-upstream

# Add targets to the upstream
curl -X POST http://localhost:8001/upstreams/mock-upstream/targets \
  --data target=mock-service-1:1080 \
  --data weight=100

curl -X POST http://localhost:8001/upstreams/mock-upstream/targets \
  --data target=mock-service-2:1080 \
  --data weight=100

# Create a service using the upstream
curl -X POST http://localhost:8001/services \
  --data name=balanced-service \
  --data host=mock-upstream
```
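
Once targets are registered, the Admin API can report their health (detailed statuses appear after health checks are configured on the upstream):

```bash
curl http://localhost:8001/upstreams/mock-upstream/health
```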


### 2. JWT Authentication

Set up JWT authentication for more secure API access:

```bash
# Enable the JWT plugin
curl -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=jwt

# Create a consumer
curl -X POST http://localhost:8001/consumers \
  --data username=jwt-user

# Create JWT credentials
curl -X POST http://localhost:8001/consumers/jwt-user/jwt \
  --data algorithm=HS256 \
  --data key=custom-key \
  --data secret=custom-secret
```
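
Kong matches the token's iss claim against the credential key and verifies the signature with the secret. A minimal shell sketch for producing and sending an HS256 token (in practice you would use a JWT library):

```bash
# Build header.payload.signature with base64url encoding
header=$(printf '{"alg":"HS256","typ":"JWT"}' | openssl base64 -A | tr '+/' '-_' | tr -d '=')
payload=$(printf '{"iss":"custom-key"}' | openssl base64 -A | tr '+/' '-_' | tr -d '=')
signature=$(printf '%s' "$header.$payload" | openssl dgst -binary -sha256 -hmac 'custom-secret' | openssl base64 -A | tr '+/' '-_' | tr -d '=')

# Send the token to a JWT-protected route
curl -i http://localhost:8000/api -H "Authorization: Bearer $header.$payload.$signature"
```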


### 3. Request Transformation

Modify incoming requests using the request-transformer plugin (responses can be modified with the companion response-transformer plugin):

```bash
curl -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=request-transformer \
  --data config.add.headers[]=custom-header:value \
  --data config.remove.headers[]=remove-this-header \
  --data config.add.querystring[]=new-param:value
```


### 4. Caching with Redis

Set up Redis for caching responses:

```yaml
# Add to docker-compose.yml
services:
  redis:
    image: redis:6
    ports:
      - "6379:6379"
```


Configure the proxy-cache plugin (note: the Redis strategy requires Kong Enterprise's proxy-cache-advanced plugin; the open-source proxy-cache plugin only supports the in-memory strategy):

```bash
curl -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=proxy-cache \
  --data config.strategy=redis \
  --data config.redis.host=redis \
  --data config.redis.port=6379 \
  --data config.cache_ttl=300
```
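
Once caching is active, Kong reports cache state in the X-Cache-Status response header; the first request should be a Miss and a repeat within the TTL a Hit:

```bash
curl -s -o /dev/null -D - http://localhost:8000/api | grep -i x-cache-status
curl -s -o /dev/null -D - http://localhost:8000/api | grep -i x-cache-status
```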


### 5. Traffic Control with Canary Releases

Implement canary releases using the canary plugin (a Kong Enterprise plugin):

```bash
# Create canary service
curl -X POST http://localhost:8001/services \
  --data name=canary-service \
  --data url='http://canary-api:1080'

# Configure canary routing
curl -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=canary \
  --data config.percentage=20 \
  --data config.upstream_host=canary-service
```


### 6. Custom Plugins

Create a simple custom plugin (example in Lua):

```lua
-- custom-plugin/handler.lua
local CustomHandler = {
  VERSION = "1.0.0",
  PRIORITY = 1000,
}

function CustomHandler:access(conf)
  kong.service.request.set_header("X-Custom-Header", "custom-value")
end

return CustomHandler

-- custom-plugin/schema.lua
return {
  name = "custom-plugin",
  fields = {
    { config = {
        type = "record",
        fields = {
          { some_setting = { type = "string", default = "default-value" } },
        },
      },
    },
  },
}
```
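
To load the plugin, Kong has to be able to find the Lua files and have the plugin enabled by name. A minimal sketch (the mount path and lua package path are assumptions to adapt to your layout):

```bash
# In docker-compose.yml, mount the plugin and enable it on the kong service (sketch):
#   volumes:
#     - ./custom-plugin:/usr/local/custom/kong/plugins/custom-plugin
#   environment:
#     KONG_PLUGINS: bundled,custom-plugin
#     KONG_LUA_PACKAGE_PATH: /usr/local/custom/?.lua;;

# After restarting Kong, enable the plugin on a service
curl -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=custom-plugin
```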


### 7. Monitoring and Logging

Set up Prometheus monitoring:

```bash
# Enable the Prometheus plugin globally
curl -X POST http://localhost:8001/plugins \
  --data name=prometheus
```

Add a Prometheus service to docker-compose.yml:

```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
```


Configure detailed logging:

```bash
# Enable the file-log plugin
curl -X POST http://localhost:8001/services/mock-service/plugins \
  --data name=file-log \
  --data config.path=/usr/local/kong/logs/custom.log \
  --data config.reopen=true
```


### 8. API Versioning

Implement API versioning using header-based routing (this assumes a second service, mock-service-v2, has been created for the v2 backend):

```bash
# Create routes for different versions
curl -X POST http://localhost:8001/services/mock-service/routes \
  --data paths[]=/api \
  --data headers.version=v1

curl -X POST http://localhost:8001/services/mock-service-v2/routes \
  --data paths[]=/api \
  --data headers.version=v2
```
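
Clients then pick a version with the header used in the route definitions:

```bash
# Routed to mock-service (v1)
curl -i http://localhost:8000/api -H 'version: v1'

# Routed to mock-service-v2
curl -i http://localhost:8000/api -H 'version: v2'
```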


### 9. GraphQL Support

Enable GraphQL routing and introspection:

```bash
# Create GraphQL service
curl -X POST http://localhost:8001/services \
  --data name=graphql-service \
  --data url='http://graphql-api:3000'

# Enable the GraphQL proxy caching plugin (a Kong Enterprise plugin)
curl -X POST http://localhost:8001/services/graphql-service/plugins \
  --data name=graphql-proxy-cache-advanced \
  --data config.schema_path=/usr/local/kong/declarative/schema.graphql
```


These advanced features demonstrate Kong's flexibility in handling complex API management scenarios. Remember to adjust configurations based on your specific requirements and production environment needs.

## Production Deployment Patterns

### 1. High Availability Setup

Configure Kong for high availability:

```yaml
# docker-compose.ha.yml
version: '3.7'

services:
  kong-1:
    image: kong:latest
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_CLUSTER_LISTEN: 0.0.0.0:8005
      KONG_CLUSTER_PEERS: kong-2:8005,kong-3:8005

  kong-2:
    image: kong:latest
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_CLUSTER_LISTEN: 0.0.0.0:8005
      KONG_CLUSTER_PEERS: kong-1:8005,kong-3:8005

  kong-3:
    image: kong:latest
    environment:
      KONG_DATABASE: postgres
      KONG_PG_HOST: kong-database
      KONG_PROXY_LISTEN: 0.0.0.0:8000
      KONG_CLUSTER_LISTEN: 0.0.0.0:8005
      KONG_CLUSTER_PEERS: kong-1:8005,kong-2:8005

  nginx-lb:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    ports:
      - "80:80"
      - "443:443"
```


### 2. Kubernetes Deployment

Deploy Kong on Kubernetes using Helm:

```bash
# Add Kong Helm repository
helm repo add kong https://charts.konghq.com
helm repo update

# Install Kong
helm install kong kong/kong --namespace kong --create-namespace

# Configure ingress
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-ingress
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mock-service
            port:
              number: 80
EOF
```
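
After installation you can check that the pods are running and find the external address of the proxy service (exact resource names depend on the chart version):

```bash
kubectl get pods -n kong
kubectl get svc -n kong
```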

### 3. Multi-Region Setup

Configure Kong for multi-region deployment:

```yaml
# Primary region configuration (control plane)
environment:
  KONG_ROLE: control_plane
  KONG_CLUSTER_CERT: /path/to/cluster.crt
  KONG_CLUSTER_CERT_KEY: /path/to/cluster.key
  KONG_CLUSTER_LISTEN: 0.0.0.0:8005
  KONG_CLUSTER_TELEMETRY_LISTEN: 0.0.0.0:8006

# Data plane configuration (secondary regions)
environment:
  KONG_ROLE: data_plane
  KONG_DATABASE: "off"   # data planes run database-less in hybrid mode
  KONG_CLUSTER_CONTROL_PLANE: control-plane.example.com:8005
  KONG_CLUSTER_TELEMETRY_ENDPOINT: control-plane.example.com:8006
  KONG_CLUSTER_CERT: /path/to/cluster.crt
  KONG_CLUSTER_CERT_KEY: /path/to/cluster.key
```
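
Both planes share the same cluster certificate and key. One way to generate them (a sketch using a plain self-signed certificate; Kong also provides its own tooling for this):

```bash
openssl req -new -x509 -nodes -newkey rsa:2048 \
  -keyout cluster.key -out cluster.crt \
  -days 1095 -subj "/CN=kong_clustering"
```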


### 4. Security Hardening

Implement security best practices:

```bash
# Enable the IP restriction plugin
curl -X POST http://localhost:8001/plugins \
  --data name=ip-restriction \
  --data config.allow[]=10.0.0.0/8

# Configure security headers
curl -X POST http://localhost:8001/plugins \
  --data name=response-transformer \
  --data config.add.headers[]=Strict-Transport-Security:max-age=31536000 \
  --data config.add.headers[]=X-Frame-Options:DENY \
  --data config.add.headers[]=X-Content-Type-Options:nosniff

# Enable mutual TLS (mtls-auth is a Kong Enterprise plugin; config.ca_certificates
# expects the IDs of CA certificates created beforehand via the /ca_certificates Admin API)
curl -X POST http://localhost:8001/plugins \
  --data name=mtls-auth \
  --data config.ca_certificates[]=<ca-certificate-id> \
  --data config.revocation_check_mode=SKIP_UNHANDLED
```


### 5. Performance Optimization

Tune Kong for optimal performance:

```yaml
environment:
  # Worker configuration
  KONG_NGINX_WORKER_PROCESSES: auto
  KONG_NGINX_WORKER_CONNECTIONS: 2048

  # Proxy buffering
  KONG_NGINX_PROXY_BUFFER_SIZE: 128k
  KONG_NGINX_PROXY_BUFFERS: 4 256k

  # Keepalive settings
  KONG_NGINX_PROXY_KEEPALIVE: 60
  KONG_UPSTREAM_KEEPALIVE_POOL_SIZE: 1000

  # DNS resolver settings
  KONG_DNS_ORDER: LAST,SRV,A,CNAME
  KONG_DNS_NO_SYNC: "off"
  KONG_DNS_ERROR_TTL: 1
```


### 6. Disaster Recovery

Implement backup and recovery procedures:

```bash
# Backup Kong's configuration
kong config db_export config.yml

# Backup Postgres database
pg_dump -U kong -h localhost -p 5432 kong > kong_backup.sql

# Restore configuration
kong config db_import config.yml

# Restore database
psql -U kong -h localhost -p 5432 kong < kong_backup.sql
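```

In the Docker Compose setup from this post, the kong CLI lives inside the container, so the export would be run through docker exec (a sketch):

```bash
docker exec -it kong kong config db_export /tmp/kong-config.yml
docker cp kong:/tmp/kong-config.yml ./kong-config.yml
```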


### 7. Blue-Green Deployments

Implement blue-green deployment strategy:

```bash
# Create blue and green upstreams
curl -X POST http://localhost:8001/upstreams \
  --data name=blue-upstream

curl -X POST http://localhost:8001/upstreams \
  --data name=green-upstream

# Switch traffic by repointing the service's host at the green upstream
curl -X PATCH http://localhost:8001/services/balanced-service \
  --data host=green-upstream

# Gradual transition using the canary plugin (Kong Enterprise)
curl -X POST http://localhost:8001/services/balanced-service/plugins \
  --data name=canary \
  --data config.percentage=50 \
  --data config.upstream_host=green-upstream
```


These deployment patterns and configurations are essential for running Kong in a production environment. Remember to:

1. Always test configurations in a staging environment first
2. Implement proper monitoring and alerting
3. Keep documentation up to date
4. Regularly review and update security configurations
5. Plan for scalability and future growth
