Last week, we took the first steps into the world of API management by setting up a self-hosted 3scale environment. Now, we're taking it to the next level by deploying 3scale APIcast on Red Hat OpenShift Service on AWS (ROSA) using the ROSA operator.
Here's what we learned along the way and why this migration makes sense for our infrastructure.
The Starting Point: Setting up Self-Hosted 3scale
The journey began with a traditional self-hosted 3scale deployment. This gave us hands-on experience with:
- API Management Fundamentals: Understanding how 3scale handles API keys, rate limiting, and developer portals
- Configuration Management: Learning the ins and outs of API policies, plans, and applications
- Integration Patterns: Connecting the backend services to the 3scale gateway
While the self-hosted approach worked well for getting started, we quickly realized we wanted the scalability and managed services that come with a cloud-native approach.
What is ROSA?
Red Hat OpenShift Service on AWS (ROSA) is a fully-managed OpenShift service that runs natively on Amazon Web Services (AWS). It combines Red Hat OpenShift with the scalability and reliability of AWS infrastructure.
Key ROSA Benefits:
- Fully Managed: Red Hat handles cluster operations, updates, and maintenance
- AWS Native: Deep integration with AWS services like EBS, ELB, and VPC
- Enterprise Ready: Built-in security, compliance, and support from Red Hat
- Pay-as-you-go: Flexible pricing model based on actual usage
APIcast Deployment Options in ROSA
ROSA provides multiple ways to deploy 3scale APIcast, each suited for different use cases:
1. Operator-Based Deployment (Recommended)
- Declarative Configuration: Uses Custom Resource Definitions (CRDs)
- Lifecycle Management: Automatic updates and scaling
- GitOps Ready: Perfect for CI/CD pipelines
- Multi-Environment Support: Easy promotion between dev/staging/prod
2. Helm Charts
- Template-Based: Traditional Kubernetes deployment method
- Customizable: Fine-grained control over deployment parameters
- Version Control: Easy rollback and version management
3. Manual YAML Deployment
- Direct Control: Complete control over all Kubernetes resources
- Learning Purpose: Great for understanding underlying components
- Custom Scenarios: When you need specific configurations not covered by operators
For this setup, we chose the operator-based approach because it provides the best balance of simplicity and control, especially when integrating with the existing self-hosted 3scale management layer.
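If the APIcast operator is not yet installed on the cluster, it can be added through OLM. The sketch below is a minimal example; the package name, channel, and catalog source are assumptions — verify the real values with `oc get packagemanifests | grep apicast` in your cluster. A namespace-scoped install also needs an OperatorGroup targeting the same namespace.

```yaml
# Hypothetical OLM Subscription for the APIcast operator.
# Package name, channel, and source below are assumptions --
# confirm them against your OperatorHub catalog before applying.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: apicast-operator
  namespace: apicast
spec:
  name: apicast-operator
  channel: alpha
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```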
Why Replicate APIcast on ROSA?
The decision to deploy APIcast on ROSA wasn't about replacing the self-hosted 3scale management layer — it was about creating a safe, isolated testing environment that replicates our current setup.
The Goal: Keep the existing self-hosted 3scale Admin Portal and System components while deploying APIcast gateways on ROSA. This replicates the current working setup in an isolated environment where APIs reaching the backend services can be tested safely. Beyond that, the replica pattern enables:
- High Availability: Eliminate single points of failure in your API gateway layer
- Blue-Green Deployments: Enable zero-downtime updates by switching traffic between replicas
- Testing Environment: Create a production-like environment for testing API changes
- Disaster Recovery: Maintain backup gateways in different availability zones
- Performance Testing: Isolate load testing from production traffic
Understanding the Architecture
Before diving into the implementation, it's crucial to understand where APIcast fits in the request flow:
ROSA (Red Hat OpenShift Service on AWS) contains:
- OpenShift Route: Acts as the ingress controller, handling external traffic routing
- Kubernetes Service: Provides load balancing and service discovery for the APIcast pods
- 3scale APIcast Pods: Multiple instances of the API gateway running as containers
3scale Components:
- APIcast Gateway Pods (running in ROSA): Handle the actual API traffic, rate limiting, authentication, and proxying to backend services
- 3scale Management (self-hosted): Can be deployed separately to manage API policies, analytics, and configuration
Traffic Flow:
- Internet traffic comes through the OpenShift Route
- Gets load balanced by the Kubernetes Service
- Distributed across multiple APIcast gateway pods
- APIcast pods proxy requests to backend APIs
- 3scale Management provides configuration and policies to APIcast pods
The key distinction is that APIcast runs as containerized pods within ROSA, while the 3scale Management component can be self-hosted either within the same ROSA cluster or on separate infrastructure, depending on the deployment preferences.
The APIcast Gateway Role:
- Authentication: Validates API keys and handles OAuth flows
- Rate Limiting: Enforces quotas and throttling policies
- Policy Enforcement: Applies transformation, caching, and security policies
- Analytics Collection: Gathers metrics for monitoring and billing
- Request Routing: Directs traffic to appropriate backend services
This positioning makes APIcast the critical control point for all API traffic—it's not just a proxy, but an intelligent gateway that enforces the API strategy.
Deploying APIcast on ROSA with the Operator
With the ROSA cluster already in place and the 3scale operator installed, we were ready to deploy APIcast instances that would integrate with the existing self-hosted 3scale management layer.
This guide walks through the complete process of replicating an existing APIcast deployment connected to a self-hosted 3scale management platform.
Configuring APIcast Custom Resources
The operator uses Custom Resource Definitions (CRDs) to manage APIcast deployments:
```yaml
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: apicast-development
  namespace: apicast
spec:
  adminPortalURL: https://self-hosted-3scale-admin.example.com
  exposedHost:
    host: apps.[domain of our openshift].openshiftapps.com
  replicas: 3
  resources:
    limits:
      cpu: 1000m
      memory: 128Mi
    requests:
      cpu: 500m
      memory: 64Mi
```
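One caveat: in the upstream apicast-operator, the admin portal connection is typically supplied through a Secret referenced by `adminPortalCredentialsRef`, rather than a plain URL field in the CR. If your operator version follows that pattern, the equivalent setup would look like this sketch (the secret name is hypothetical, and the URL value is a placeholder):

```yaml
# Secret holding the token-embedded admin portal URL (value is illustrative)
apiVersion: v1
kind: Secret
metadata:
  name: apicast-portal-credentials
  namespace: apicast
stringData:
  AdminPortalURL: https://TOKEN@self-hosted-3scale-admin.example.com
---
# The APIcast CR references the Secret instead of embedding the URL
apiVersion: apps.3scale.net/v1alpha1
kind: APIcast
metadata:
  name: apicast-development
  namespace: apicast
spec:
  adminPortalCredentialsRef:
    name: apicast-portal-credentials
```

Check the CRD schema on your cluster (`oc explain apicast.spec`) to see which form your installed operator expects.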
Prerequisites
- OpenShift cluster access with appropriate permissions
- Working self-hosted 3scale deployment
- Access to 3scale admin portal
- oc CLI tool configured
- curl for testing
Step 1: Analyze Existing APIcast Configuration
First, we need to understand how the current APIcast is configured. This ensures the replica maintains the same behavior.
```shell
# Get the deployment configuration
oc get deployment [source-apicast-name] -n [source-namespace] -o yaml > apicast-config.yaml

# Check environment variables
oc get deployment [source-apicast-name] -n [source-namespace] \
  -o jsonpath='{.spec.template.spec.containers[0].env}' | jq .
```
Key Environment Variables to Note
```json
[
  {
    "name": "THREESCALE_PORTAL_ENDPOINT",
    "value": "https://TOKEN@3scale-admin.apps.cluster.domain.com"
  },
  {
    "name": "THREESCALE_DEPLOYMENT_ENV",
    "value": "development"
  }
]
```
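Because THREESCALE_PORTAL_ENDPOINT embeds the access token in the userinfo part of the URL, a quick format check before reusing the value elsewhere can save a debugging round trip. A small sketch (the endpoint value here is a placeholder, not a real token):

```shell
# Placeholder endpoint; substitute the real value from the deployment env
ENDPOINT="https://my-access-token@3scale-admin.example.com"

# The userinfo part before '@' must be present, or APIcast cannot
# authenticate against the admin portal when downloading its configuration
if echo "$ENDPOINT" | grep -qE '^https://[^@/]+@'; then
  echo "endpoint includes an access token"
else
  echo "WARNING: endpoint is missing the access token" >&2
fi
```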
Step 2: Create the Replica Namespace
Organize the APIcast replica in a dedicated namespace for better isolation and management.
```shell
# Create replica namespace
oc create namespace apicast-replica

# Label for organization
oc label namespace apicast-replica purpose=apicast-replica
```
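The same namespace can also be created declaratively, which fits the GitOps workflow mentioned earlier. This manifest mirrors the two commands above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: apicast-replica
  labels:
    purpose: apicast-replica
```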
Step 3: Deploy APIcast Replica
Create the Deployment configuration (based on the apicast-config.yaml captured earlier, with updated metadata names):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apicast-replica
  namespace: apicast-replica
  labels:
    app: apicast-replica
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apicast-replica
  template:
    metadata:
      labels:
        app: apicast-replica
    spec:
      containers:
      - name: apicast
        image: quay.io/3scale/apicast:latest
        # Named ports so the probes below can reference them by name
        ports:
        - name: proxy
          containerPort: 8080
          protocol: TCP
        - name: management
          containerPort: 8090
          protocol: TCP
        env:
        - name: THREESCALE_PORTAL_ENDPOINT
          value: "https://TOKEN@3scale-admin.apps.[domain of our openshift].openshiftapps.com"
        - name: THREESCALE_DEPLOYMENT_ENV
          value: "development"
        - name: APICAST_CONFIGURATION_LOADER
          value: "boot"
        - name: APICAST_LOG_LEVEL
          value: "notice"
        - name: APICAST_PATH_ROUTING
          value: "true"
        - name: APICAST_RESPONSE_CODES
          value: "true"
        livenessProbe:
          httpGet:
            path: /status/live
            port: management
          initialDelaySeconds: 10
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /status/ready
            port: management
          initialDelaySeconds: 15
          timeoutSeconds: 1
```
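The replica Deployment above omits resource requests and limits. Carrying over the values from the operator CR keeps the replica's footprint comparable to the operator-managed gateways; the fragment below is a sketch using those earlier values (tune them to your actual load):

```yaml
# Add under spec.template.spec.containers[0] of the replica Deployment.
# Values mirror the APIcast CR shown earlier; adjust for your workload.
resources:
  requests:
    cpu: 500m
    memory: 64Mi
  limits:
    cpu: 1000m
    memory: 128Mi
```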
Deploy the Replica
```shell
# Apply the deployment
oc apply -f apicast-replica-deployment.yaml

# Watch the deployment
oc get pods -n apicast-replica -w
```
Step 4: Create a Service and Route for the APIcast Replica
Service Configuration
```yaml
apiVersion: v1
kind: Service
metadata:
  name: apicast-replica-service
  namespace: apicast-replica
spec:
  selector:
    app: apicast-replica
  ports:
  - name: proxy
    port: 8080
    targetPort: 8080
  - name: management
    port: 8090
    targetPort: 8090
```
Route Configuration
```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: api-replica
  namespace: apicast-replica
spec:
  host: api-replica.apps.[domain of our openshift].openshiftapps.com
  to:
    kind: Service
    name: apicast-replica-service
  port:
    targetPort: proxy
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```
Then apply both configurations:
```shell
# Apply service and route
oc apply -f apicast-replica-service.yaml
oc apply -f apicast-replica-route.yaml
```
Checking the ROSA console at this point, the new deployment, service, and route all appear under the apicast-replica namespace.
Step 5: Configure 3scale for the Replica
APIcast loads services based on the gateway URL configured in 3scale, not just the connection to 3scale.
The Configuration Challenge
When APIcast starts, it:
- Connects to 3scale using THREESCALE_PORTAL_ENDPOINT
- Downloads service configurations for THREESCALE_DEPLOYMENT_ENV (staging/production)
- Only loads services where the proxy endpoint matches the incoming request host
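One relevant knob in this flow is APICAST_CONFIGURATION_LOADER. The replica uses boot, which downloads the configuration once at startup (which is why Step 6 requires a restart); switching to lazy with a cache TTL makes APIcast fetch configuration on demand, which can be convenient while iterating on 3scale settings. A sketch of the alternative env settings (the TTL value is just an example):

```yaml
# Alternative env entries for the replica Deployment:
# fetch configuration lazily and refresh it every 300 seconds
- name: APICAST_CONFIGURATION_LOADER
  value: "lazy"
- name: APICAST_CONFIGURATION_CACHE
  value: "300"
```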
Update Product Configuration
In the self-hosted 3scale Admin Portal:
- Go to Products > Select the product (e.g., "Test API")
- Go to Integration > Settings
- Update the Staging Public Base URL so it matches the replica's route host:
https://api-replica.apps.[domain].openshiftapps.com
Critical Step: Go to Integration → Configuration and click "Promote v[X] to Staging".
Important: The "Promote to Staging" step is what actually deploys your configuration changes to APIcast. Without this, APIcast continues using the old configuration.
Step 6: Restart and Verify
Restart APIcast to Load New Configuration
```shell
# Force APIcast to reload configuration
oc rollout restart deployment/apicast-replica -n apicast-replica
# deployment.apps/apicast-replica restarted

# Wait for pods to be ready
oc get pods -n apicast-replica -w
```
Test Authentication
```shell
curl -v "https://api-replica.../status/ready" -H "user-key: API_APPLICATION_KEY"
```
Test Actual API Endpoints
```shell
curl -X POST 'https://api-replica.../your/api/endpoint' \
  -H 'user-key: APPLICATION_API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"test": "data"}'
```
Conclusion
Replicating APIcast with self-hosted 3scale requires careful attention to service configuration alignment. The key insight is that APIcast service discovery depends on matching the configured gateway URLs in 3scale with the actual domains your replica serves.
By following this guide, we've created a fully functional APIcast replica that can serve as a backup, testing environment, or part of a blue-green deployment strategy. Remember that the "Promote to Staging" step in 3scale is critical: configuration changes don't take effect until they are promoted and APIcast is restarted.
The replica approach provides excellent flexibility for API gateway management while maintaining high availability and enabling safe deployment practices. With proper monitoring and procedures in place, you can confidently manage multiple APIcast instances serving the critical API infrastructure.
Additional Resources
This guide was created based on real-world APIcast replica deployment experience. The configuration examples are anonymized but reflect actual working deployments.