The average frontend pipeline does three things: build, unit tests, deploy. That is not enough for serious production work. It is missing visual regression, E2E tests against a real environment, canary deployments, and automatic rollback when something goes wrong. This article walks through the complete pipeline I use, with CodePipeline, CodeBuild, Playwright, and CloudWatch metrics wired together to drive automatic promotion.
The target pipeline
flowchart TB
Source[GitHub push] --> Build[CodeBuild<br/>build + unit tests]
Build --> Deploy1[Deploy staging]
Deploy1 --> E2E[CodeBuild<br/>Playwright E2E]
E2E --> Visual[CodeBuild<br/>Visual regression]
Visual --> Lighthouse[CodeBuild<br/>Lighthouse CI]
Lighthouse --> Gate{Metrics OK?}
Gate -->|Yes| Canary[Deploy canary 10% prod]
Gate -->|No| Fail[Abort pipeline]
Canary --> Monitor[CloudWatch monitoring]
Monitor --> Decision{Error rate<br/>acceptable?}
Decision -->|Yes| Full[Deploy full prod]
Decision -->|No| Rollback[Auto rollback]
style Canary fill:#f39c12,color:#fff
style Full fill:#2ecc71,color:#fff
style Rollback fill:#e74c3c,color:#fff
Every step is a gate. If anything fails, the pipeline stops. This is the opposite of the linear "push and pray" pipeline.
The pipeline's CDK stack
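To make the gate semantics concrete, here is a toy model (illustrative only, CodePipeline implements this for you): stages run in order, and the first failure prevents everything downstream from ever executing.

```typescript
// Toy model of gate semantics: run checks in order, stop at the first failure.
// Stage names are illustrative, not tied to any real pipeline.
type Gate = { name: string; run: () => boolean };

export function runPipeline(
  gates: Gate[]
): { passed: string[]; failedAt?: string } {
  const passed: string[] = [];
  for (const gate of gates) {
    if (!gate.run()) {
      // Downstream gates never execute once one fails
      return { passed, failedAt: gate.name };
    }
    passed.push(gate.name);
  }
  return { passed };
}
```

The point of the model: a failing E2E stage means the Lighthouse and canary stages are simply never reached.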
// infra/pipeline-stack.ts
import * as cdk from 'aws-cdk-lib';
import * as codepipeline from 'aws-cdk-lib/aws-codepipeline';
import * as actions from 'aws-cdk-lib/aws-codepipeline-actions';
import * as codebuild from 'aws-cdk-lib/aws-codebuild';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

export class FrontendPipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const artifactBucket = new s3.Bucket(this, 'ArtifactBucket', {
      encryption: s3.BucketEncryption.S3_MANAGED,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      lifecycleRules: [
        { id: 'expire-old', expiration: cdk.Duration.days(30) },
      ],
    });

    // Deploy targets and canary Lambdas. The names here are placeholders;
    // in the real stack these resources are defined (or imported) elsewhere.
    const stagingBucket = s3.Bucket.fromBucketName(
      this, 'StagingBucket', 'staging-frontend-assets'
    );
    const productionBucket = s3.Bucket.fromBucketName(
      this, 'ProductionBucket', 'production-frontend-assets'
    );
    const canaryDeployFunction = lambda.Function.fromFunctionName(
      this, 'CanaryDeployFn', 'canary-deploy'
    );
    const monitorCanaryFunction = lambda.Function.fromFunctionName(
      this, 'MonitorCanaryFn', 'monitor-canary'
    );

    const sourceOutput = new codepipeline.Artifact('Source');
    const buildOutput = new codepipeline.Artifact('Build');
    const e2eOutput = new codepipeline.Artifact('E2E');

    // Source: GitHub via CodeConnections
    const sourceAction = new actions.CodeStarConnectionsSourceAction({
      actionName: 'GitHub_Source',
      owner: 'miempresa',
      repo: 'frontend',
      branch: 'main',
      connectionArn: process.env.CODESTAR_CONNECTION_ARN!,
      output: sourceOutput,
      triggerOnPush: true,
    });

    // Build project
    const buildProject = new codebuild.PipelineProject(this, 'BuildProject', {
      environment: {
        buildImage: codebuild.LinuxBuildImage.STANDARD_7_0,
        computeType: codebuild.ComputeType.MEDIUM,
        privileged: false,
      },
      // DOCKER_LAYER caching requires privileged mode, which a static
      // frontend build does not need; cache source and custom paths only.
      cache: codebuild.Cache.local(
        codebuild.LocalCacheMode.SOURCE,
        codebuild.LocalCacheMode.CUSTOM
      ),
      buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec.yml'),
      timeout: cdk.Duration.minutes(15),
    });

    const buildAction = new actions.CodeBuildAction({
      actionName: 'Build_And_Test',
      project: buildProject,
      input: sourceOutput,
      outputs: [buildOutput],
    });

    // E2E tests project
    const e2eProject = new codebuild.PipelineProject(this, 'E2EProject', {
      environment: {
        buildImage: codebuild.LinuxBuildImage.STANDARD_7_0,
        computeType: codebuild.ComputeType.LARGE,
      },
      buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec-e2e.yml'),
      timeout: cdk.Duration.minutes(30),
      environmentVariables: {
        STAGING_URL: { value: 'https://staging.miempresa.com' },
      },
    });

    const e2eAction = new actions.CodeBuildAction({
      actionName: 'Playwright_E2E',
      project: e2eProject,
      input: sourceOutput,
      outputs: [e2eOutput],
    });

    // Lighthouse CI
    const lighthouseProject = new codebuild.PipelineProject(this, 'LighthouseProject', {
      environment: {
        buildImage: codebuild.LinuxBuildImage.STANDARD_7_0,
        computeType: codebuild.ComputeType.MEDIUM,
      },
      buildSpec: codebuild.BuildSpec.fromSourceFilename('buildspec-lighthouse.yml'),
    });

    const lighthouseAction = new actions.CodeBuildAction({
      actionName: 'Lighthouse_CI',
      project: lighthouseProject,
      input: sourceOutput,
    });

    // Main pipeline
    new codepipeline.Pipeline(this, 'Pipeline', {
      pipelineName: 'frontend-pipeline',
      artifactBucket,
      pipelineType: codepipeline.PipelineType.V2,
      stages: [
        { stageName: 'Source', actions: [sourceAction] },
        { stageName: 'Build', actions: [buildAction] },
        {
          stageName: 'DeployStaging',
          actions: [
            new actions.S3DeployAction({
              actionName: 'Deploy_S3',
              bucket: stagingBucket,
              input: buildOutput,
              extract: true,
            }),
          ],
        },
        {
          stageName: 'Quality',
          actions: [e2eAction, lighthouseAction], // run in parallel
        },
        {
          stageName: 'DeployProductionCanary',
          actions: [
            new actions.LambdaInvokeAction({
              actionName: 'Canary_Deploy',
              lambda: canaryDeployFunction,
              userParameters: { percentage: 10 },
            }),
          ],
        },
        {
          stageName: 'Monitor',
          actions: [
            new actions.LambdaInvokeAction({
              actionName: 'Monitor_Canary',
              lambda: monitorCanaryFunction,
              userParameters: { durationMinutes: 15 },
            }),
          ],
        },
        {
          stageName: 'DeployProduction',
          actions: [
            new actions.S3DeployAction({
              actionName: 'Full_Deploy',
              bucket: productionBucket,
              input: buildOutput,
              extract: true,
            }),
          ],
        },
      ],
    });
  }
}
buildspec.yml: build + unit tests
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 20
    commands:
      - npm install -g pnpm@9
      - pnpm config set store-dir /tmp/.pnpm-store
  pre_build:
    commands:
      - pnpm install --frozen-lockfile
      - pnpm run lint
      - pnpm run typecheck
  build:
    commands:
      - pnpm run build
      - pnpm run test:unit -- --coverage
  post_build:
    commands:
      - echo "Uploading coverage to S3..."
      - aws s3 sync coverage/ s3://$ARTIFACT_BUCKET/coverage/$CODEBUILD_RESOLVED_SOURCE_VERSION/

reports:
  jest:
    files:
      - 'coverage/junit.xml'
    file-format: JUNITXML
  coverage:
    files:
      - 'coverage/clover.xml'
    file-format: CLOVERXML

artifacts:
  files:
    - 'dist/**/*'
  base-directory: .

cache:
  paths:
    - '/tmp/.pnpm-store/**/*'
    - 'node_modules/**/*'
    - '.next/cache/**/*'
buildspec-e2e.yml: Playwright
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 20
    commands:
      - npm install -g pnpm@9
      - pnpm install --frozen-lockfile
      - pnpm exec playwright install --with-deps chromium firefox webkit
  pre_build:
    commands:
      - echo "Waiting for staging to be ready..."
      - |
        ready=false
        for i in $(seq 1 30); do
          if curl -fsS "$STAGING_URL/api/health"; then
            echo "Staging ready"
            ready=true
            break
          fi
          echo "Waiting..."
          sleep 10
        done
        # Fail the build explicitly if staging never came up
        [ "$ready" = "true" ] || exit 1
  build:
    commands:
      # Reporters are configured in playwright.config.ts
      - pnpm exec playwright test
  post_build:
    commands:
      - aws s3 sync playwright-report/ s3://$ARTIFACT_BUCKET/playwright-reports/$CODEBUILD_RESOLVED_SOURCE_VERSION/
      - aws s3 sync test-results/ s3://$ARTIFACT_BUCKET/playwright-results/$CODEBUILD_RESOLVED_SOURCE_VERSION/
      - echo "Report URL https://$ARTIFACT_BUCKET.s3.amazonaws.com/playwright-reports/$CODEBUILD_RESOLVED_SOURCE_VERSION/index.html"

reports:
  playwright:
    files:
      - 'test-results/results.xml'
    file-format: JUNITXML

artifacts:
  files:
    - 'playwright-report/**/*'
    - 'test-results/**/*'
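The shell wait loop above generalizes to a small retry helper. A sketch in TypeScript, with the probe and the sleep function injected so it can be unit tested without a network (the helper name and signature are my own, not part of the buildspec):

```typescript
// Poll a readiness probe up to `attempts` times, waiting `delayMs` between
// tries. Returns true as soon as the probe succeeds, false if it never does.
export async function waitForReady(
  probe: () => Promise<boolean>,
  attempts: number,
  delayMs: number,
  // Injectable sleep so tests can skip real waiting
  sleep: (ms: number) => Promise<void> = ms =>
    new Promise<void>(r => setTimeout(() => r(), ms))
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await probe()) return true;
    if (i < attempts - 1) await sleep(delayMs);
  }
  return false;
}
```

In CI you would call it with a probe that hits `$STAGING_URL/api/health` and fail the step when it resolves to false, mirroring the `exit 1` in the buildspec.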
Playwright config for production
// playwright.config.ts
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',
  timeout: 60_000,
  expect: { timeout: 10_000 },
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 4 : undefined,
  reporter: [
    ['html', { open: 'never' }],
    ['junit', { outputFile: 'test-results/results.xml' }],
    ['json', { outputFile: 'test-results/results.json' }],
    process.env.CI ? ['github'] : ['list'],
  ],
  use: {
    baseURL: process.env.STAGING_URL || 'http://localhost:3000',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    viewport: { width: 1280, height: 720 },
    actionTimeout: 15_000,
    navigationTimeout: 30_000,
  },
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
    { name: 'mobile-chrome', use: { ...devices['Pixel 7'] } },
    { name: 'mobile-safari', use: { ...devices['iPhone 14'] } },
  ],
});
Well-structured E2E tests
The principle: page objects plus fixtures to avoid duplication:
// e2e/pages/checkout-page.ts
import { Page, Locator, expect } from '@playwright/test';

export class CheckoutPage {
  readonly emailInput: Locator;
  readonly nameInput: Locator;
  readonly addressInput: Locator;
  readonly cardNumberInput: Locator;
  readonly submitButton: Locator;
  readonly successMessage: Locator;

  constructor(public readonly page: Page) {
    this.emailInput = page.getByLabel('Email');
    this.nameInput = page.getByLabel('Nombre completo');
    this.addressInput = page.getByLabel('Dirección');
    this.cardNumberInput = page.getByLabel('Número de tarjeta');
    this.submitButton = page.getByRole('button', { name: 'Confirmar compra' });
    this.successMessage = page.getByText('Pedido confirmado');
  }

  async goto() {
    await this.page.goto('/checkout');
    await expect(this.page).toHaveTitle(/Checkout/);
  }

  async fillShippingInfo(data: {
    email: string;
    name: string;
    address: string;
  }) {
    await this.emailInput.fill(data.email);
    await this.nameInput.fill(data.name);
    await this.addressInput.fill(data.address);
  }

  async fillPayment(cardNumber: string) {
    await this.cardNumberInput.fill(cardNumber);
  }

  async submit() {
    // Register the response listener before clicking to avoid a race
    await Promise.all([
      this.page.waitForResponse(
        resp => resp.url().includes('/api/orders') && resp.status() === 200
      ),
      this.submitButton.click(),
    ]);
  }

  async expectSuccess() {
    await expect(this.successMessage).toBeVisible({ timeout: 15_000 });
  }
}
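The `Promise.all` in `submit()` is not decorative: the response wait must be registered before the click, or a fast response can fire before anyone is listening and the wait hangs forever. A tiny event-emitter model of that race (the `Emitter` class is mine, just to make the point testable):

```typescript
// Minimal one-shot event emitter standing in for "the network response event".
type Listener = () => void;

class Emitter {
  private listeners: Listener[] = [];
  once(fn: Listener) { this.listeners.push(fn); }
  emit() {
    const ls = this.listeners;
    this.listeners = [];
    ls.forEach(fn => fn());
  }
}

// Resolves when the emitter fires *after* this call; a listener registered
// too late misses the event entirely.
export function waitFor(emitter: Emitter): Promise<void> {
  return new Promise<void>(resolve => emitter.once(() => resolve()));
}

// Correct order, mirroring submit(): register the wait, then trigger.
export async function correctOrder(emitter: Emitter): Promise<boolean> {
  const wait = waitFor(emitter); // listener registered first
  emitter.emit();                // then the "click" fires the response
  await wait;
  return true;
}
```

Click first and wait second, and the promise from `waitFor` never resolves, which is exactly how an awaited `waitForResponse` placed after a `click` can deadlock a test.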
// e2e/checkout.spec.ts
import { test, expect } from '@playwright/test';
import { CheckoutPage } from './pages/checkout-page';
import { ProductPage } from './pages/product-page';
test.describe('Flujo de checkout completo', () => {
test('usuario nuevo puede completar compra', async ({ page }) => {
const product = new ProductPage(page);
await product.goto('sku-12345');
await product.addToCart();
const checkout = new CheckoutPage(page);
await checkout.goto();
await checkout.fillShippingInfo({
email: `test+${Date.now()}@example.com`,
name: 'Juan Pérez',
address: 'Calle 123 #45-67, Bogotá',
});
await checkout.fillPayment('4242 4242 4242 4242');
await checkout.submit();
await checkout.expectSuccess();
});
test('validación de email inválido', async ({ page }) => {
const checkout = new CheckoutPage(page);
await checkout.goto();
await checkout.emailInput.fill('no-es-email');
await checkout.submitButton.click();
await expect(page.getByText('Email inválido')).toBeVisible();
});
});
Visual regression with Playwright
Playwright has native support for visual comparison:
// e2e/visual.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Visual regression', () => {
  const pages = [
    { path: '/', name: 'home' },
    { path: '/products', name: 'product-list' },
    { path: '/products/sku-12345', name: 'product-detail' },
    { path: '/about', name: 'about' },
  ];

  for (const { path, name } of pages) {
    test(`snapshot ${name} - desktop`, async ({ page }) => {
      await page.goto(path);
      await page.waitForLoadState('networkidle');
      // Hide non-deterministic elements
      await page.addStyleTag({
        content: `
          [data-timestamp], .animated, .carousel { visibility: hidden !important; }
        `,
      });
      await expect(page).toHaveScreenshot(`${name}-desktop.png`, {
        fullPage: true,
        animations: 'disabled',
        maxDiffPixelRatio: 0.01,
      });
    });
  }
});
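For intuition on what `maxDiffPixelRatio: 0.01` allows: it caps the fraction of differing pixels relative to the total pixel count. A simplified sketch of that decision (not Playwright's actual comparator, which also applies per-pixel color thresholds before counting a pixel as "different"):

```typescript
// Pass/fail decision behind maxDiffPixelRatio, simplified: the screenshot
// passes when differing pixels stay within the allowed fraction of the image.
export function screenshotPasses(
  diffPixels: number,
  width: number,
  height: number,
  maxDiffPixelRatio: number
): boolean {
  return diffPixels / (width * height) <= maxDiffPixelRatio;
}
```

At the 1280x720 viewport used above, 0.01 tolerates up to 9,216 changed pixels per snapshot, roughly a 96x96 square, which is why hiding carousels and timestamps still matters.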
Lighthouse CI
# buildspec-lighthouse.yml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 20
    commands:
      - npm install -g @lhci/cli
  build:
    commands:
      - lhci autorun --config=.lighthouserc.js

reports:
  lighthouse:
    files:
      - '.lighthouseci/**/*.json'
// .lighthouserc.js
module.exports = {
  ci: {
    collect: {
      url: [
        'https://staging.miempresa.com/',
        'https://staging.miempresa.com/products',
        'https://staging.miempresa.com/checkout',
      ],
      numberOfRuns: 3,
      settings: {
        chromeFlags: '--no-sandbox --headless',
        preset: 'desktop',
      },
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.95 }],
        'categories:best-practices': ['error', { minScore: 0.9 }],
        'categories:seo': ['error', { minScore: 0.9 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        'total-blocking-time': ['error', { maxNumericValue: 300 }],
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
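The category assertions above boil down to "fail if any measured score drops below its floor." A sketch of that gate logic as a pure helper (`lighthouseGate` is my name for it, not part of `@lhci/cli`):

```typescript
// Compare measured Lighthouse category scores (0-1) against configured
// minima; return the list of categories that failed their floor.
export function lighthouseGate(
  scores: Record<string, number>,
  minima: Record<string, number>
): string[] {
  const failures: string[] = [];
  for (const [category, minScore] of Object.entries(minima)) {
    const actual = scores[category];
    // A missing category counts as a failure: no data, no promotion
    if (actual === undefined || actual < minScore) failures.push(category);
  }
  return failures;
}
```

An empty return array means the gate passes and the pipeline is allowed to proceed to the canary stage.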
Canary deployment
The Lambda that handles the canary:
// lambda/canary-deploy.ts
import {
  CloudFrontClient,
  GetDistributionConfigCommand,
  UpdateDistributionCommand,
} from '@aws-sdk/client-cloudfront';
import {
  CodePipelineClient,
  PutJobSuccessResultCommand,
} from '@aws-sdk/client-codepipeline';

const cf = new CloudFrontClient({});
const pipeline = new CodePipelineClient({});

export const handler = async (event: any) => {
  // CodePipeline invokes the Lambda with a 'CodePipeline.job' payload;
  // UserParameters arrives as a JSON string inside the action configuration.
  const job = event['CodePipeline.job'];
  const params = JSON.parse(
    job.data.actionConfiguration.configuration.UserParameters
  );
  const percentage = Number(params.percentage);
  const distId = process.env.PROD_DISTRIBUTION_ID!;

  const { DistributionConfig, ETag } = await cf.send(
    new GetDistributionConfigCommand({ Id: distId })
  );

  // CloudFront origin groups only do failover, not weighted routing. The
  // actual 90/10 split happens in a Lambda@Edge viewer-request function
  // that picks an origin per request; here we only make sure both origin
  // versions are wired into the distribution.
  const originGroup = DistributionConfig!.OriginGroups!.Items![0];
  originGroup.Members!.Items = [
    { OriginId: 'current-version' },
    { OriginId: 'new-version' },
  ];

  await cf.send(
    new UpdateDistributionCommand({
      Id: distId,
      DistributionConfig,
      IfMatch: ETag,
    })
  );

  // CodePipeline does not consider a Lambda action finished until the job
  // is acknowledged explicitly; otherwise the action times out.
  await pipeline.send(new PutJobSuccessResultCommand({ jobId: job.id }));

  return { statusCode: 200, canaryPercentage: percentage };
};
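The per-request routing decision that the Lambda@Edge function makes can be sketched as a pure function. The origin domain names and the cookie values are assumptions for illustration; the important parts are the weighted draw and the stickiness, because without a sticky cookie users flip between versions and pollute the canary metrics:

```typescript
interface OriginChoice {
  domainName: string;
  sticky: string; // value to persist in a cookie so the user stays put
}

// Pure routing decision: given the canary percentage (0-100), the user's
// existing sticky cookie (if any), and a random draw in [0, 1), pick an
// origin. Deterministic inputs make it unit testable.
export function pickOrigin(
  percentage: number,
  stickyCookie: string | undefined,
  draw: number
): OriginChoice {
  // Returning users keep the version they were first assigned
  if (stickyCookie === 'canary' || stickyCookie === 'stable') {
    return {
      domainName: stickyCookie === 'canary'
        ? 'new-version.s3.amazonaws.com'
        : 'current-version.s3.amazonaws.com',
      sticky: stickyCookie,
    };
  }
  const toCanary = draw < percentage / 100;
  return {
    domainName: toCanary
      ? 'new-version.s3.amazonaws.com'
      : 'current-version.s3.amazonaws.com',
    sticky: toCanary ? 'canary' : 'stable',
  };
}
```

In the real Lambda@Edge handler you would call this with `Math.random()` and the parsed `Cookie` header, then rewrite `request.origin` accordingly.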
Monitoring the canary
// lambda/monitor-canary.ts
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from '@aws-sdk/client-cloudwatch';
import {
  CodePipelineClient,
  PutJobSuccessResultCommand,
  PutJobFailureResultCommand,
} from '@aws-sdk/client-codepipeline';

const cw = new CloudWatchClient({});
const pipeline = new CodePipelineClient({});

export const handler = async (event: any) => {
  const job = event['CodePipeline.job'];
  const { durationMinutes } = JSON.parse(
    job.data.actionConfiguration.configuration.UserParameters
  );

  // Caveat: Lambda caps execution at 15 minutes, so this in-process wait
  // only works for short observation windows. For longer ones, use Step
  // Functions or CodePipeline's continuation token to re-invoke.
  await new Promise(resolve => setTimeout(resolve, durationMinutes * 60_000));

  const endTime = new Date();
  const startTime = new Date(endTime.getTime() - durationMinutes * 60_000);

  // 5xx error rate for the distribution during the canary window
  const errors = await cw.send(
    new GetMetricStatisticsCommand({
      Namespace: 'AWS/CloudFront',
      MetricName: '5xxErrorRate',
      Dimensions: [
        { Name: 'DistributionId', Value: process.env.PROD_DISTRIBUTION_ID! },
        { Name: 'Region', Value: 'Global' },
      ],
      StartTime: startTime,
      EndTime: endTime,
      Period: 300,
      Statistics: ['Average'],
    })
  );

  const sum = errors.Datapoints?.reduce((acc, d) => acc + (d.Average ?? 0), 0) ?? 0;
  const avgErrorRate = sum / (errors.Datapoints?.length || 1);

  const THRESHOLD = 0.5; // 0.5% errors = fail
  if (avgErrorRate > THRESHOLD) {
    // Report the failure back to CodePipeline so the stage aborts
    await pipeline.send(
      new PutJobFailureResultCommand({
        jobId: job.id,
        failureDetails: {
          type: 'JobFailed',
          message: `Canary failed: error rate ${avgErrorRate.toFixed(2)}% exceeds ${THRESHOLD}%`,
        },
      })
    );
    return { passed: false };
  }

  await pipeline.send(new PutJobSuccessResultCommand({ jobId: job.id }));
  return { errorRate: avgErrorRate, threshold: THRESHOLD, passed: true };
};
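The threshold check inside the monitor Lambda is worth factoring into a pure helper so it can be unit tested without CloudWatch. A sketch (`evaluateCanary` is my name for it, not part of the pipeline code):

```typescript
// Shape of a CloudWatch GetMetricStatistics datapoint, reduced to what we use
interface Datapoint {
  Average?: number;
}

// Average the 5xx error-rate datapoints over the canary window and compare
// against the threshold (both expressed in percent).
export function evaluateCanary(
  datapoints: Datapoint[],
  thresholdPercent: number
): { errorRate: number; passed: boolean } {
  const sum = datapoints.reduce((acc, d) => acc + (d.Average ?? 0), 0);
  // No datapoints means no observed errors; treat as a 0% rate
  const errorRate = datapoints.length ? sum / datapoints.length : 0;
  return { errorRate, passed: errorRate <= thresholdPercent };
}
```

Factoring the decision out also makes it trivial to extend the gate later, for example adding latency percentiles alongside the error rate.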
Slack notifications
// lambda/pipeline-notifier.ts
import https from 'node:https';

export const handler = async (event: any) => {
  // Invoked by an EventBridge rule on CodePipeline state-change events
  const detail = event.detail;
  const color = detail.state === 'SUCCEEDED' ? 'good' :
    detail.state === 'FAILED' ? 'danger' : 'warning';
  const message = {
    attachments: [
      {
        color,
        title: `Pipeline ${detail.state}: ${detail.pipeline}`,
        fields: [
          { title: 'Stage', value: detail.stage, short: true },
          { title: 'Action', value: detail.action, short: true },
          { title: 'Execution ID', value: detail['execution-id'], short: false },
        ],
        footer: 'CodePipeline',
        ts: Math.floor(Date.now() / 1000),
      },
    ],
  };
  await postToSlack(message);
};

async function postToSlack(message: any) {
  return new Promise((resolve, reject) => {
    const req = https.request(process.env.SLACK_WEBHOOK_URL!, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    }, (res) => {
      let data = '';
      res.on('data', chunk => data += chunk);
      res.on('end', () => resolve(data));
    });
    req.on('error', reject);
    req.write(JSON.stringify(message));
    req.end();
  });
}
What I learned building this pipeline
1. Order matters.
Unit tests before deploy. E2E before canary. Never push anything to production that has not passed every gate.
2. Flaky tests kill trust.
An E2E test that fails at random teaches the team to hit retry until it passes. At that point, the pipeline is theater. Fix flaky tests or delete them.
3. Visual regression takes discipline.
Visual comparisons fail on legitimate changes (a date, a piece of dynamic data). You need a clear process for accepting new snapshots.
4. A canary without monitoring is a blind deploy.
A canary only helps if you have solid baselines for what "normal" looks like. Without that, you cannot tell whether that 0.3% error rate was always there or is new.
5. Pipelines are designed for the worst day.
Thinking about the happy path is fine to get started. But a pipeline proves its worth when everything is on fire.
Closing thoughts
A mature pipeline is not just a script. It is a series of gates that protect your production environment. The upfront investment (one to two weeks) pays for itself the first time the pipeline catches something before it reaches production. AWS gives you all the pieces; the work is stitching them together well.
In the next article: serious session management with Redis ElastiCache and Next.js, and how to scale sessions when signed cookies are no longer enough.