DEV Community

Anderson Leite

Self-Hosting n8n on AWS ECS Fargate with Terraform, Okta OIDC SSO and a Shared ALB + RDS

TL;DR: A practical walkthrough of deploying n8n on AWS ECS Fargate using Terraform, sharing an existing ALB and RDS instance, wiring up OIDC SSO via a community init-container pattern, and all the sharp edges you'll hit along the way.


Why self-host n8n?

n8n is a powerful workflow automation platform. The cloud version is great, but once your team starts building internal automations that touch internal APIs, credentials, or sensitive data, self-hosting becomes the obvious move. You get full data residency, SSO enforcement, and no per-workflow pricing.

Since this is (at least for now) a PoC, it wouldn't make much sense to pay for a license. However, I also didn't want to keep managing users, so I challenged myself to add SSO even on the Community Edition (yes, it's possible).

This post covers the full AWS infrastructure we built: every Terraform resource, the SSO integration, and the surprising number of things that look right but aren't.


Architecture overview

                        ┌─────────────────────────┐
                        │    Cloudflare DNS       │
                        │  n8n.example.com → ALB  │
                        │  proxied = false        │
                        └──────────┬──────────────┘
                                   │ HTTPS :443
                                   ▼
              ┌───────────────────────────────────────────┐
              │            VPC (10.10.0.0/16)             │
              │                                           │
              │  Public Subnets                           │
              │  ┌───────────────────────────────────┐    │
              │  │   Internet-facing ALB             │    │
              │  │   HTTP :80 → redirect HTTPS       │    │
              │  │   HTTPS :443 → forward to ECS     │    │
              │  │   ACM cert: n8n.example.com       │    │
              │  │   NAT Gateway (for outbound OIDC) │    │
              │  └────────────────┬──────────────────┘    │
              │                   │ HTTP :5678            │
              │  Private Subnets  ▼                       │
              │  ┌────────────────────────────────────┐   │
              │  │  ECS Fargate Task (n8nio/n8n)      │   │
              │  │  1 vCPU / 2 GB  desired_count=1    │   │
              │  │  Init container → hooks.js         │   │
              │  │  No public IP                      │   │
              │  │         │ PostgreSQL :5432         │   │
              │  │         ▼                          │   │
              │  │  Shared RDS PostgreSQL 17          │   │
              │  └────────────────────────────────────┘   │
              │                                           │
              └─────────────┬─────────────┬──────────────┘
                            ▼             ▼
                    [Secrets Manager]  [CloudWatch]
                    enc-key, db creds  /aws/ecs/n8n

Key design decisions:

  • Shared ALB and RDS — rather than spinning up dedicated infrastructure, n8n reuses the existing load balancer and PostgreSQL instance from our tooling environment. This saved ~$48/month compared to dedicated resources.
  • Single task, no autoscaling — n8n's default (main) mode runs as a single instance; horizontal scaling requires queue mode with Redis and worker processes, which is overkill for a PoC. desired_count = 1, period.
  • OIDC SSO via init container — we wanted Okta SSO without an Enterprise license. The community cweagans/n8n-oidc hooks approach works, but requires a specific ECS init container pattern to inject the hooks file without breaking n8n's startup.

Terraform structure

All files live under terraform/ in a single environment root. The n8n deployment is split across purpose-scoped files:

File            What it creates
n8n-sg.tf       Security groups for ECS task, RDS ingress rule, VPC endpoint rule
n8n-rds.tf      RDS database + user (manual bootstrap)
n8n-secrets.tf  Secrets Manager entries: DB creds, encryption key, OIDC client secret
n8n-s3.tf       S3 bucket for binary data (future use)
n8n-iam.tf      Task execution role + task role
n8n-alb.tf      ACM certificate; ALB target group + listener rule in alb.tf
n8n-ecs.tf      ECS task definition (init container + app) + ECS service
n8n-dns.tf      Cloudflare CNAME record

The Terraform, piece by piece

1. Security groups (n8n-sg.tf)

n8n gets its own ECS security group. It does not get its own ALB security group; the shared ALB's managed SG handles that.

resource "aws_security_group" "n8n_ecs" {
  name        = "${local.name_prefix}-n8n-ecs"
  description = "Allow traffic from shared ALB to n8n ECS tasks"
  vpc_id      = data.aws_vpc.tooling.id
}

# Only allow traffic from the ALB
resource "aws_vpc_security_group_ingress_rule" "n8n_ecs_from_alb" {
  security_group_id            = aws_security_group.n8n_ecs.id
  from_port                    = 5678
  to_port                      = 5678
  ip_protocol                  = "tcp"
  referenced_security_group_id = module.alb.security_group_id
}

# All egress — ECS tasks need to reach Okta (via NAT), RDS, and AWS VPC endpoints
resource "aws_vpc_security_group_egress_rule" "n8n_ecs_all_egress" {
  security_group_id = aws_security_group.n8n_ecs.id
  ip_protocol       = "-1"
  cidr_ipv4         = "0.0.0.0/0"
}

# Allow n8n ECS tasks to reach VPC interface endpoints (Secrets Manager, CloudWatch, ECR)
resource "aws_vpc_security_group_ingress_rule" "vpc_endpoints_from_n8n_ecs" {
  security_group_id            = aws_security_group.vpc_endpoints.id
  from_port                    = 443
  to_port                      = 443
  ip_protocol                  = "tcp"
  referenced_security_group_id = aws_security_group.n8n_ecs.id
}

# Allow n8n to reach the shared RDS instance
resource "aws_vpc_security_group_ingress_rule" "rds_from_n8n_ecs" {
  security_group_id            = aws_security_group.rds.id
  from_port                    = 5432
  to_port                      = 5432
  ip_protocol                  = "tcp"
  referenced_security_group_id = aws_security_group.n8n_ecs.id
}

Gotcha — SG rule state drift: We observed aws_vpc_security_group_ingress_rule resources disappearing from AWS while remaining in Terraform state (terraform plan showed no diff). This caused ResourceInitializationError on task start. If your ECS task keeps failing to initialize, verify your SG rules actually exist in AWS with aws ec2 describe-security-group-rules.
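When debugging this, it helps to compare AWS reality against Terraform state side by side. These are live commands, not runnable here: the security group ID is a placeholder, and the resource address assumes the names used in this post.

```shell
# What AWS actually has for the task's security group (placeholder SG id)
aws ec2 describe-security-group-rules \
  --filters Name=group-id,Values=sg-0123456789abcdef0 \
  --query 'SecurityGroupRules[].[IsEgress,IpProtocol,FromPort,ToPort,ReferencedGroupInfo.GroupId]' \
  --output table

# What Terraform thinks exists
terraform state show aws_vpc_security_group_ingress_rule.n8n_ecs_from_alb
```

If the rule shows up in state but not in the AWS output, taint or re-create the rule resource rather than trusting the clean plan.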


2. Secrets Manager (n8n-secrets.tf)

Three secrets. The encryption key gets prevent_destroy because losing it makes all stored credentials in the n8n database permanently unrecoverable.

# N8N_ENCRYPTION_KEY — protects all credentials stored by n8n
resource "random_password" "n8n_encryption_key" {
  length  = 32
  special = false
}

resource "aws_secretsmanager_secret" "n8n_encryption_key" {
  name        = "${local.name_prefix}/n8n/encryption-key"
  description = "n8n encryption key — protects all credentials in the n8n database"

  lifecycle {
    prevent_destroy = true  # CRITICAL: never delete this
  }
}

resource "aws_secretsmanager_secret_version" "n8n_encryption_key" {
  secret_id     = aws_secretsmanager_secret.n8n_encryption_key.id
  secret_string = jsonencode({ key = random_password.n8n_encryption_key.result })
}

# Database credentials
resource "random_password" "n8n_db_password" {
  length  = 32
  special = false
}

resource "aws_secretsmanager_secret" "n8n_db_credentials" {
  name = "${local.name_prefix}/n8n/db-credentials"

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_secretsmanager_secret_version" "n8n_db_credentials" {
  secret_id     = aws_secretsmanager_secret.n8n_db_credentials.id
  secret_string = jsonencode({ password = random_password.n8n_db_password.result })
}

# OIDC client credentials — populated manually from your IdP console after apply
resource "aws_secretsmanager_secret" "n8n_oidc" {
  name        = "${local.name_prefix}/n8n/oidc-client-secret"
  description = "n8n OIDC client credentials — populate after IdP apply: {client_id, client_secret}"
}

3. IAM roles (n8n-iam.tf)

Two roles: one for the ECS agent (pull images, write logs, read secrets), one for the n8n application (access S3). Scoped to only the n8n secret paths.

# Task Execution Role — used by ECS agent
resource "aws_iam_role" "n8n_task_execution" {
  name = "${local.name_prefix}-n8n-ecsTaskExecutionRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "n8n_task_execution_managed" {
  role       = aws_iam_role.n8n_task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Scope secrets access to only n8n paths
resource "aws_iam_role_policy" "n8n_task_execution_secrets" {
  name = "${local.name_prefix}-n8n-secrets"
  role = aws_iam_role.n8n_task_execution.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["secretsmanager:GetSecretValue"]
      Resource = [
        "arn:aws:secretsmanager:${var.region}:${data.aws_caller_identity.current.account_id}:secret:${local.name_prefix}/n8n/*"
      ]
    }]
  })
}

# Task Role — used by the n8n application itself (S3 binary data)
resource "aws_iam_role" "n8n_task" {
  name = "${local.name_prefix}-n8n-ecsTaskRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "n8n_task_s3" {
  name = "${local.name_prefix}-n8n-s3"
  role = aws_iam_role.n8n_task.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
        Resource = "${module.n8n_s3_binary.arn}/*"
      },
      {
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = module.n8n_s3_binary.arn
      }
    ]
  })
}

4. ALB — shared listener, new target group

n8n shares the ALB from our existing tooling environment. The key additions are:

  1. A new ACM certificate added to the HTTPS listener's additional_certificate_arns
  2. A host-header–based listener rule routing n8n.example.com to the new target group
  3. The new target group pointing to ECS port 5678
module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "~> 10.0"

  # ... existing config ...

  listeners = {
    https = {
      port            = 443
      protocol        = "HTTPS"
      ssl_policy      = "ELBSecurityPolicy-TLS13-1-2-Res-PQ-2025-09"
      certificate_arn = module.existing_acm.validated_certificate_arn

      # Add n8n's cert to the same listener
      additional_certificate_arns = [module.n8n_acm.validated_certificate_arn]

      # Default forward (existing service)
      forward = { target_group_key = "existing_service" }

      rules = {
        n8n = {
          priority = 10
          actions  = [{ forward = { target_group_key = "n8n_ecs" } }]
          conditions = [{ host_header = { values = ["n8n.example.com"] } }]
        }
      }
    }
  }

  target_groups = {
    # ... existing target groups ...

    n8n_ecs = {
      backend_protocol     = "HTTP"
      backend_port         = 5678
      target_type          = "ip"
      deregistration_delay = 10

      health_check = {
        enabled             = true
        path                = "/healthz"
        matcher             = "200"
        interval            = 30
        timeout             = 5
        healthy_threshold   = 2
        unhealthy_threshold = 3
      }

      create_attachment = false
    }
  }
}

The ACM certificate uses DNS validation via Cloudflare (this is an internal module I wrote to make our lives easier; I can share the code if anyone needs it):

# n8n-alb.tf
module "n8n_acm" {
  source = "..."  # your ACM module with Cloudflare DNS validation

  domain_name         = "n8n.example.com"
  dns_provider        = "cloudflare"
  cloudflare_zone_id  = var.cloudflare_zone_id
  wait_for_validation = true
}

5. Cloudflare DNS (n8n-dns.tf)

resource "cloudflare_dns_record" "n8n_alb" {
  zone_id = var.cloudflare_zone_id
  name    = "n8n"
  type    = "CNAME"
  content = module.alb.dns_name
  ttl     = 60
  proxied = false  # MUST be false — see gotcha below
}

Gotcha — proxied = true causes an infinite redirect loop: with the record proxied (typically with the zone in Flexible SSL mode), Cloudflare connects to the origin over plain HTTP, so the request hits the ALB's port-80 listener, which answers with a redirect to HTTPS. The client follows it, Cloudflare again connects over HTTP, and the loop repeats forever. Always use proxied = false for records pointing to AWS ALBs that handle their own TLS termination.
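You can spot the loop from the outside. With proxied = true, every response is another redirect back to the same host; with proxied = false, the first HTTPS response is the app itself. The domain below is the article's placeholder:

```shell
curl -sI https://n8n.example.com | head -n 3
# Loop symptom: HTTP 301/302 with a Location header pointing back at the same URL
```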


6. ECS Task Definition (n8n-ecs.tf) — the interesting part

This is where most of the complexity lives. n8n requires a hooks.js file to be present before it starts (for OIDC SSO). The file can't be baked into the image (we run the upstream n8nio/n8n image unmodified), and we can't inject it via an entryPoint override.

Why not override entryPoint?

// DON'T DO THIS
"entryPoint": ["/bin/sh", "-c"],
"command": ["wget ... /home/node/.n8n/hooks/hooks.js && n8n start"]

This replaces the image's configured entrypoint with a bare /bin/sh that doesn't inherit the environment (including PATH) the image's own entrypoint sets up. The n8n binary lives at a path configured by the image, so the bare shell can't find it: you get Command "n8n" not found and the task exits.
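You can see exactly what an override would discard by inspecting the image's configured entrypoint and environment. This needs a local Docker daemon and the image pulled; the tag matches the pin used in this post:

```shell
docker inspect n8nio/n8n:2.10.1 \
  --format 'Entrypoint: {{json .Config.Entrypoint}}{{"\n"}}Env: {{json .Config.Env}}'
```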

The correct pattern: init container with a shared ephemeral volume

resource "aws_ecs_task_definition" "n8n" {
  family                   = "${local.name_prefix}-n8n"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048
  execution_role_arn       = aws_iam_role.n8n_task_execution.arn
  task_role_arn            = aws_iam_role.n8n_task.arn

  # Ephemeral volume shared between init container and n8n
  volume {
    name = "n8n-hooks"
  }

  container_definitions = jsonencode([
    # ── Init container ──────────────────────────────────────────────────
    # essential=false: its exit (0) does NOT stop the task.
    # It downloads hooks.js into the shared volume, then exits.
    {
      name      = "hooks-init"
      image     = "alpine:3.21"
      essential = false

      command = [
        "/bin/sh", "-c",
        "wget -q --tries=3 --timeout=30 -O /hooks/hooks.js https://raw.githubusercontent.com/cweagans/n8n-oidc/main/hooks.js && echo 'hooks.js downloaded OK'"
      ]

      mountPoints = [{
        sourceVolume  = "n8n-hooks"
        containerPath = "/hooks"
        readOnly      = false
      }]

      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.n8n.name
          "awslogs-region"        = var.region
          "awslogs-stream-prefix" = "hooks-init"
        }
      }
    },

    # ── n8n application container ────────────────────────────────────────
    {
      name      = "n8n"
      image     = "n8nio/n8n:2.10.1"  # always pin — never use latest
      essential = true

      readonlyRootFilesystem = false  # n8n requires a writable filesystem

      # n8n starts only AFTER hooks-init exits successfully
      dependsOn = [{
        containerName = "hooks-init"
        condition     = "COMPLETE"
      }]

      # Mount hooks.js from the shared volume, read-only
      mountPoints = [{
        sourceVolume  = "n8n-hooks"
        containerPath = "/home/node/.n8n/hooks"
        readOnly      = true
      }]

      portMappings = [{ containerPort = 5678, protocol = "tcp" }]

      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = aws_cloudwatch_log_group.n8n.name
          "awslogs-region"        = var.region
          "awslogs-stream-prefix" = "ecs"
        }
      }

      environment = [
        # Database
        { name = "DB_TYPE",                              value = "postgresdb" },
        { name = "DB_POSTGRESDB_HOST",                   value = aws_db_instance.shared.address },
        { name = "DB_POSTGRESDB_PORT",                   value = "5432" },
        { name = "DB_POSTGRESDB_DATABASE",               value = "n8n" },
        { name = "DB_POSTGRESDB_USER",                   value = "n8n" },
        # PostgreSQL 17 on RDS enforces SSL — both vars required
        { name = "DB_POSTGRESDB_SSL_ENABLED",            value = "true" },
        { name = "DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED", value = "false" },

        # Host / protocol
        # IMPORTANT: N8N_PROTOCOL must be "http" — ALB terminates SSL
        # and forwards plain HTTP to the container. Setting this to "https"
        # causes n8n to redirect every request → infinite loop.
        { name = "N8N_HOST",      value = "n8n.example.com" },
        { name = "N8N_PROTOCOL",  value = "http" },
        { name = "WEBHOOK_URL",   value = "https://n8n.example.com/" },

        # Security
        { name = "N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS", value = "true" },
        { name = "N8N_SECURE_COOKIE",                     value = "true" },
        { name = "N8N_RUNNERS_ENABLED",                   value = "true" },

        # Data retention
        { name = "EXECUTIONS_DATA_PRUNE",   value = "true" },
        { name = "EXECUTIONS_DATA_MAX_AGE", value = "168" },  # 7 days

        # Timezone
        { name = "GENERIC_TIMEZONE", value = "UTC" },
        { name = "TZ",               value = "UTC" },

        # Binary data — "s3" mode requires Enterprise license
        { name = "N8N_BINARY_DATA_MODE", value = "filesystem" },

        # OIDC hooks — cweagans/n8n-oidc
        { name = "EXTERNAL_HOOK_FILES",          value = "/home/node/.n8n/hooks/hooks.js" },
        { name = "EXTERNAL_FRONTEND_HOOKS_URLS", value = "/assets/oidc-frontend-hook.js" },
        { name = "N8N_ADDITIONAL_NON_UI_ROUTES", value = "auth" },
        { name = "OIDC_ISSUER_URL",              value = "https://your-idp.example.com" },
        { name = "OIDC_REDIRECT_URI",            value = "https://n8n.example.com/auth/oidc/callback" },
      ]

      secrets = [
        {
          name      = "DB_POSTGRESDB_PASSWORD"
          valueFrom = "${aws_secretsmanager_secret.n8n_db_credentials.arn}:password::"
        },
        {
          name      = "N8N_ENCRYPTION_KEY"
          valueFrom = "${aws_secretsmanager_secret.n8n_encryption_key.arn}:key::"
        },
        {
          name      = "OIDC_CLIENT_ID"
          valueFrom = "${aws_secretsmanager_secret.n8n_oidc.arn}:client_id::"
        },
        {
          name      = "OIDC_CLIENT_SECRET"
          valueFrom = "${aws_secretsmanager_secret.n8n_oidc.arn}:client_secret::"
        },
      ]
    }
  ])
}

resource "aws_ecs_service" "n8n" {
  name            = "${local.name_prefix}-n8n"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.n8n.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  # n8n is NOT horizontally scalable — ignore external desired_count changes
  lifecycle {
    ignore_changes = [desired_count]
  }

  network_configuration {
    subnets          = data.aws_subnets.private.ids
    security_groups  = [aws_security_group.n8n_ecs.id]
    assign_public_ip = false
  }

  load_balancer {
    target_group_arn = module.alb.target_groups["n8n_ecs"].arn
    container_name   = "n8n"
    container_port   = 5678
  }

  health_check_grace_period_seconds = 120

  deployment_circuit_breaker {
    enable   = true
    rollback = true
  }
}

SSO setup: Okta OIDC via cweagans/n8n-oidc

n8n's built-in SSO requires an Enterprise license. The community cweagans/n8n-oidc project provides a hooks-based OIDC implementation that works on Community Edition.

The Okta application (Terraform)

resource "okta_app_oauth" "n8n" {
  label  = "n8n"
  status = "ACTIVE"
  type   = "web"

  grant_types    = ["authorization_code"]
  response_types = ["code"]

  # n8n-oidc uses /auth/oidc/callback — NOT /rest/sso/oidc/callback
  redirect_uris = ["https://n8n.example.com/auth/oidc/callback"]

  consent_method             = "REQUIRED"
  issuer_mode                = "ORG_URL"
  token_endpoint_auth_method = "client_secret_basic"

  # IMPORTANT: n8n-oidc does NOT implement PKCE
  pkce_required = false

  omit_secret            = false
  refresh_token_rotation = "STATIC"
  hide_ios               = true
  hide_web               = true

  # Assign to your authentication policy
  authentication_policy   = var.n8n_auth_policy_id
  user_name_template      = "$${source.login}"
  user_name_template_type = "BUILT_IN"
}

Group assignment (assign specific groups):

# In your app group assignment locals/module
n8n = {
  app_id = okta_app_oauth.n8n.id
  groups = [
    okta_group.groups["engineering"].id,
    okta_group.groups["operations"].id,
  ]
}

Two things that will bite you if you copy from a different OIDC integration:

  • The redirect URI is /auth/oidc/callback, not /rest/sso/oidc/callback (the built-in Enterprise SSO path)
  • pkce_required = false — the community library doesn't implement PKCE; setting it to true will cause authentication failures that are very hard to debug

After applying the Okta Terraform, copy the client_id and client_secret from the Okta console into the Secrets Manager secret:

aws secretsmanager put-secret-value \
  --secret-id "your-prefix/n8n/oidc-client-secret" \
  --secret-string '{"client_id":"<from-okta>","client_secret":"<from-okta>"}'
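Worth a quick read-back before deploying: the task definition references the client_id and client_secret JSON keys by name in valueFrom, and a typo in either key only surfaces later as a task-start failure.

```shell
aws secretsmanager get-secret-value \
  --secret-id "your-prefix/n8n/oidc-client-secret" \
  --query 'SecretString' --output text
```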

RDS bootstrap

n8n shares the existing PostgreSQL instance. The database and user need to be created manually (n8n doesn't auto-create databases). There's a quirk with RDS permissions:

-- This FAILS on RDS: "must be member of role n8n" (the master user isn't a superuser)
CREATE DATABASE n8n OWNER n8n;

-- This WORKS:
CREATE USER n8n WITH PASSWORD '<password from Secrets Manager>';
CREATE DATABASE n8n;  -- owned by admin
ALTER DATABASE n8n OWNER TO n8n;
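If you prefer the single-statement form, the standard RDS workaround is to make the master user a member of the new role first. A sketch, assuming your master user is named admin:

```sql
-- Grant the new role to the RDS master user, then the one-liner works
GRANT n8n TO admin;
CREATE DATABASE n8n OWNER n8n;
```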

Also: PostgreSQL 17 on RDS enforces SSL for all connections. The n8n env vars for this are not what you'd guess:

# WRONG (not a valid n8n env var)
DB_POSTGRESDB_SSL=true

# CORRECT
DB_POSTGRESDB_SSL_ENABLED=true
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false

The second var is needed because the RDS CA certificate isn't in Node.js's default CA bundle.
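If you'd rather keep certificate verification on instead of disabling it, n8n also accepts a CA file path via DB_POSTGRESDB_SSL_CA. A sketch, assuming you extend the init container to download the RDS trust bundle into the shared volume alongside hooks.js:

```
DB_POSTGRESDB_SSL_ENABLED=true
DB_POSTGRESDB_SSL_CA=/home/node/.n8n/hooks/rds-global-bundle.pem
# Bundle source: https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
```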


N8N_PROTOCOL and the redirect loop trap

This one is subtle. If you set N8N_PROTOCOL=https (which seems correct since your site is HTTPS), n8n will redirect every incoming HTTP request to HTTPS. But the ALB always sends plain HTTP to the container after terminating TLS. Result: infinite redirect loop.

The correct configuration when you're behind a TLS-terminating proxy:

N8N_PROTOCOL=http          # What the container actually receives
WEBHOOK_URL=https://n8n.example.com/  # What n8n uses to generate public URLs

Deployment order

Multi-repo Terraform changes require explicit sequencing. The OIDC application and the infrastructure live in different repositories (and in our case, different AWS accounts):

1. Apply IdP Terraform (okta-management or equivalent)
   → Creates OIDC application
   → Retrieve client_id and client_secret from IdP console

2. Populate OIDC secret in Secrets Manager
   → aws secretsmanager put-secret-value ...

3. Apply infrastructure Terraform (this repo)
   → Security groups
   → RDS ingress rules
   → Secrets Manager secrets (encryption key + DB creds auto-generated)
   → IAM roles
   → ACM certificate (DNS validated, ~2 min)
   → ALB target group + listener rule
   → ECS task definition + service
   → Cloudflare DNS CNAME

4. Bootstrap RDS (one-time)
   → CREATE USER n8n ...
   → CREATE DATABASE n8n; ALTER DATABASE n8n OWNER TO n8n;

5. First login
   → Visit n8n URL, create owner account (local, pre-SSO)
   → Settings → SSO → OIDC → configure with Okta issuer URL
   → Enforce SSO (disables local login)

Cost breakdown

Sharing resources makes a significant difference:

Resource                              Cost/month
ECS Fargate (1 vCPU / 2 GB, ~730 h)   ~$35
Shared RDS PostgreSQL (incremental)   ~$5
NAT Gateway (fixed + data)            ~$40
Shared ALB (incremental)              ~$5
S3 + Secrets Manager                  ~$2
Total                                 ~$87/month
Dedicated ALB + RDS (alternative)     +$48/month

Lessons learned

Things that look right but aren't

  1. N8N_PROTOCOL=https — Set it to http when behind a TLS-terminating load balancer. Use WEBHOOK_URL for the public HTTPS address.

  2. proxied=true in Cloudflare — Creates an infinite redirect loop. Always proxied=false when the ALB terminates TLS.

  3. CREATE DATABASE n8n OWNER n8n — Fails on RDS with a "must be member of role" error. Use two separate statements.

  4. DB_POSTGRESDB_SSL=true — Not a valid env var. Use DB_POSTGRESDB_SSL_ENABLED=true + DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false.

  5. Overriding ECS entryPoint to run pre-start scripts — Breaks PATH resolution, n8n binary not found. Use a dependsOn: COMPLETE init container with a shared volume instead.

  6. OIDC redirect URI — Use /auth/oidc/callback (community SSO path), not /rest/sso/oidc/callback (Enterprise SSO path).

  7. pkce_required=true in Okta — The community n8n-oidc library doesn't implement PKCE. Leave it false.

The init container pattern is reusable

Whenever you need to inject a file into an ECS Fargate container before startup (and you can't bake it into the image), this is the pattern:

Init container (essential=false):
  - Runs Alpine or BusyBox
  - Downloads/generates the file into a named shared volume
  - Exits 0

Main container:
  - dependsOn: [{condition: "COMPLETE"}]
  - Mounts the volume read-only at the expected path

This preserves the image's entrypoint and PATH configuration, which is critical for images like n8n that expect a specific runtime environment.
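Condensed into a reusable Terraform skeleton (every name here is a placeholder, not one of the resources above):

```hcl
volume {
  name = "shared"
}

container_definitions = jsonencode([
  {
    name      = "file-init"
    image     = "alpine:3.21"
    essential = false   # a clean exit must not stop the task
    command   = ["/bin/sh", "-c", "wget -q -O /shared/the-file https://example.com/the-file"]
    mountPoints = [{ sourceVolume = "shared", containerPath = "/shared", readOnly = false }]
  },
  {
    name      = "app"
    image     = "your-app:pinned-tag"
    essential = true
    dependsOn = [{ containerName = "file-init", condition = "COMPLETE" }]
    mountPoints = [{ sourceVolume = "shared", containerPath = "/expected/path", readOnly = true }]
  }
])
```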

Audit for shared resources before provisioning new ones

Before creating a dedicated ALB, RDS instance, or any other expensive resource, check what's already running. In our case, auditing the existing tooling environment saved ~$48/month. Make this a standard step in your deploy planning for any new ECS service.


The complete file list

For reference, here's every file that was created or modified:

terraform/
├── alb.tf                  # MODIFIED: added n8n target group, listener rule, additional cert
├── n8n-alb.tf              # NEW: ACM certificate module for n8n.example.com
├── n8n-dns.tf              # NEW: Cloudflare CNAME → ALB
├── n8n-ecs.tf              # NEW: task definition (init + app containers) + ECS service
├── n8n-iam.tf              # NEW: task execution role + task role + policies
├── n8n-rds.tf              # NEW: comment + manual bootstrap instructions
├── n8n-s3.tf               # NEW: binary data bucket (future Enterprise use)
├── n8n-secrets.tf          # NEW: encryption key, DB creds, OIDC client secret
└── n8n-sg.tf               # NEW: ECS SG, RDS ingress rule, VPC endpoint rule

okta-management/
├── apps.tf                 # MODIFIED: added okta_app_oauth.n8n
└── locals.tf               # MODIFIED: added n8n to app_group_assignments

If you're self-hosting n8n on AWS, hopefully this saves you the debugging cycles we went through. The init container SSO pattern in particular was the least obvious part — there's very little documentation on how to do file injection in ECS Fargate without breaking the container's runtime environment.

Happy automating.
