Ikponmwosa Omorisiagbon
Auto-Generate API Gateway Terraform from OpenAPI Specs

The Problem
You've written your OpenAPI spec. It's beautiful, well-documented, and describes your API perfectly. But now you need to deploy it to AWS API Gateway, and that means writing Terraform.
Again.
For the fifth time this month.

```hcl
resource "aws_apigatewayv2_api" "payments_api" {
  name          = "payments-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_route" "payments_post" {
  api_id    = aws_apigatewayv2_api.payments_api.id
  route_key = "POST /payments"
  target    = "integrations/${aws_apigatewayv2_integration.payments.id}"
}

resource "aws_apigatewayv2_integration" "payments" {
  api_id             = aws_apigatewayv2_api.payments_api.id
  integration_type   = "HTTP_PROXY"
  integration_uri    = "https://payments.internal.com"
  integration_method = "POST"
}
```

...and repeat for every endpoint.

Sound familiar? You're essentially duplicating information that already exists in your OpenAPI spec.
What if Your API Spec Was Your Infrastructure Definition?
This repetition bothered me enough to build something to fix it. What if instead of maintaining two sources of truth (OpenAPI spec + Terraform), your API specification could generate the infrastructure?
Here's what that looks like:
```yaml
# payment-api.yaml
openapi: 3.0.3
info:
  title: Payment Service API
  version: 1.0.0
x-service: payments  # Infrastructure hint

paths:
  /process:
    post:
      summary: Process payment
      x-rate-limit:    # Infrastructure configuration
        requests: 100
        period: 60
        burst: 150
      responses:
        '200':
          description: Payment processed
```

One command later:
```bash
./striche.sh deploy -s payment-api.yaml -p aws --auto-approve
```
Your API Gateway is live, with proper rate limiting, routing, and all the Terraform you didn't have to write.
How It Works
The tool (Striche Gateway) follows a simple pipeline:
OpenAPI Spec → Canonical Model → Platform Templates → Terraform → Deployed Infrastructure

1. Parse the Spec

Extract the API structure, paths, methods, and custom extensions:

```typescript
// Simplified parsing logic
const spec = await parseOpenAPI('payment-api.yaml');
const service = spec.info['x-service'] || 'default';
const routes = extractRoutes(spec.paths);
const rateLimits = extractRateLimits(spec.paths);
```
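The helpers above aren't shown in the post; here is a minimal sketch of what `extractRoutes` could look like, assuming the spec has already been parsed into a plain object (e.g. with js-yaml). The `Route` shape and the helper itself are my own illustration, not the tool's actual source:

```typescript
// Hypothetical route-extraction helper over a parsed OpenAPI `paths` object.
interface Route {
  path: string;
  method: string;
  rateLimit?: { requests: number; period: number; burst: number };
}

const HTTP_METHODS = ["get", "post", "put", "patch", "delete"] as const;

function extractRoutes(paths: Record<string, any>): Route[] {
  const routes: Route[] = [];
  for (const [path, item] of Object.entries(paths)) {
    for (const method of HTTP_METHODS) {
      const op = item[method];
      if (!op) continue; // path item has no operation for this verb
      routes.push({
        path,
        method: method.toUpperCase(),
        rateLimit: op["x-rate-limit"], // undefined when the extension is absent
      });
    }
  }
  return routes;
}
```

The vendor extension rides along on the operation object, so no extra parsing pass is needed.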
2. Generate Infrastructure

Use Handlebars templates to create clean Terraform:

```hcl
# Generated main.tf (Handlebars template)
module "{{service}}_service" {
  source = "./modules/service"

  service_name = "{{service}}"
  routes = [
    {{#each routes}}
    {
      path = "{{path}}"
      method = "{{method}}"
      {{#if rateLimit}}
      rate_limit = {{rateLimit.requests}}
      burst_limit = {{rateLimit.burst}}
      {{/if}}
    }{{#unless @last}},{{/unless}}
    {{/each}}
  ]
}
```
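To make the rendering step concrete without pulling in Handlebars, the sketch below hand-rolls the `routes = [...]` body the template above produces. This is a stand-in I wrote to show the shape of the generated HCL, not the tool's actual rendering code (which uses Handlebars):

```typescript
// Simplified stand-in for the Handlebars step: emit the routes array body.
interface Route {
  path: string;
  method: string;
  rateLimit?: { requests: number; burst: number };
}

function renderRoutes(routes: Route[]): string {
  return routes
    .map((r) => {
      const lines = [
        "    {",
        `      path   = "${r.path}"`,
        `      method = "${r.method}"`,
      ];
      if (r.rateLimit) {
        // Only routes with an x-rate-limit extension get throttle attributes
        lines.push(`      rate_limit  = ${r.rateLimit.requests}`);
        lines.push(`      burst_limit = ${r.rateLimit.burst}`);
      }
      lines.push("    }");
      return lines.join("\n");
    })
    .join(",\n");
}
```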
3. Deploy

Standard Terraform workflow - nothing magical, just automated:

```bash
terraform init
terraform plan
terraform apply
```

The Unified Gateway Pattern
The interesting part isn't just generating basic API Gateway configs. It's solving the "multiple microservices, one gateway" problem. Instead of deploying separate gateways for each service:

```bash
# Multiple services through a single gateway
./striche.sh deploy -s auth-api.yaml,payments-api.yaml,orders-api.yaml -p aws
```

Result: one API Gateway with intelligent routing:

```
POST /auth/login       → Auth service backend
POST /payments/process → Payments service backend
GET  /orders/{id}      → Orders service backend
```

All through a single endpoint with consolidated rate limiting and monitoring.
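The merge step behind this pattern can be sketched as follows. This is my assumed logic, not the tool's actual source: each spec's paths are namespaced under its `x-service` value, so several services share one gateway without route collisions:

```typescript
// Hypothetical merge: prefix each service's paths with its service name.
function mergeServiceRoutes(
  specs: { service: string; paths: string[] }[]
): Map<string, string> {
  const routeToService = new Map<string, string>();
  for (const { service, paths } of specs) {
    for (const path of paths) {
      const prefixed = `/${service}${path}`;
      // Fail fast rather than silently shadowing another service's route
      if (routeToService.has(prefixed)) {
        throw new Error(`route collision: ${prefixed}`);
      }
      routeToService.set(prefixed, service);
    }
  }
  return routeToService;
}
```

Failing on collisions at generation time is cheaper than debugging misrouted traffic after deploy.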
OpenAPI Extensions: Infrastructure Hints
The key insight is using OpenAPI's vendor extension mechanism to embed infrastructure configuration:
```yaml
paths:
  /login:
    post:
      x-rate-limit:
        requests: 10    # Allow 10 login attempts
        period: 60      # per minute
        burst: 15       # with burst protection
      x-service: auth   # Route to auth backend
```
These extensions translate directly to Terraform resources:
```hcl
resource "aws_apigatewayv2_route" "auth_login" {
  api_id    = aws_apigatewayv2_api.main.id
  route_key = "POST /auth/login"
}

# In API Gateway v2, throttling is configured per route on the stage,
# not on the route resource itself
resource "aws_apigatewayv2_stage" "default" {
  api_id      = aws_apigatewayv2_api.main.id
  name        = "$default"
  auto_deploy = true

  route_settings {
    route_key              = "POST /auth/login"
    throttling_rate_limit  = 10
    throttling_burst_limit = 15
  }
}
```
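The translation itself is a small pure function. The sketch below is my own illustration of the mapping (the field names follow the Terraform `route_settings` block; how Striche actually interprets the `period` field isn't shown in the post, so the values pass through unchanged here):

```typescript
// Hypothetical mapping from the x-rate-limit extension to the values a
// stage's route_settings block expects.
interface XRateLimit {
  requests: number;
  period: number;
  burst: number;
}

function toRouteSettings(routeKey: string, rl: XRateLimit) {
  return {
    route_key: routeKey,
    throttling_rate_limit: rl.requests,
    throttling_burst_limit: rl.burst,
  };
}
```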

Real-World Benefits
After using this approach for several microservices deployments:
Time Savings: What used to take 30-45 minutes of Terraform writing now takes 2 minutes
Consistency: Every API Gateway follows the same patterns and best practices
Single Source of Truth: API documentation and infrastructure config live together
Easy Updates: Change the spec, redeploy, and the infrastructure updates automatically
Generated vs Hand-Written Terraform
The output is standard Terraform that you could have written manually:

```
out/
├── main.tf              # Clean, readable root config
├── variables.tf         # Parameterized inputs
├── terraform.tfvars     # Service-specific values
├── outputs.tf           # API Gateway URLs and IDs
└── modules/
    └── service/         # Reusable service module
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
```

No custom providers, no weird abstractions. Just well-structured Terraform that follows community conventions.
When This Makes Sense
This approach works well when:

You have multiple APIs with similar infrastructure patterns
Your team maintains OpenAPI specs anyway
You want consistency across service deployments
Rate limiting and routing rules change frequently

It's probably overkill for:

Single API deployments
Complex custom infrastructure requirements
Teams that don't use OpenAPI specs

Try It Yourself
The tool is open source and available on GitHub:
```bash
git clone https://github.com/striche-AI/striche-gateway
cd striche-gateway
npm install

# Try with the example specs
./striche.sh deploy -s specs/payment-service.yaml -p aws
```

Currently supports AWS (API Gateway v2), with GCP and Azure planned.
The Bigger Picture
This is really about closing the gap between API design and infrastructure deployment. Your OpenAPI spec already describes your API's behavior - why not let it describe the infrastructure too?
We're moving toward a world where infrastructure is more declarative and less manual. Treating API specifications as infrastructure definitions feels like a natural evolution.
