<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: juan jose orjuela</title>
    <description>The latest articles on DEV Community by juan jose orjuela (@jjoc007).</description>
    <link>https://dev.to/jjoc007</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F372107%2Fdb5f8059-0f31-43c7-9393-c20631c5f4ee.jpg</url>
      <title>DEV Community: juan jose orjuela</title>
      <link>https://dev.to/jjoc007</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jjoc007"/>
    <language>en</language>
    <item>
      <title>Foundation models and Amazon Bedrock: tuning AI's knobs like a stereo system</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Sun, 22 Mar 2026 03:19:52 +0000</pubDate>
      <link>https://dev.to/jjoc007/modelos-fundacionales-y-amazon-bedrock-ajustando-las-perillas-de-la-ia-como-si-fuera-un-equipo-de-2dne</link>
      <guid>https://dev.to/jjoc007/modelos-fundacionales-y-amazon-bedrock-ajustando-las-perillas-de-la-ia-como-si-fuera-un-equipo-de-2dne</guid>
      <description>&lt;p&gt;When I started playing with AI models on AWS, I quickly realized something: using a foundation model "raw" is like buying a 4K TV and never touching the picture settings. It works, sure, but you're missing the best part.&lt;/p&gt;

&lt;p&gt;In this post I want to tell you, developer to developer, what foundation models are, what Amazon Bedrock is and, above all, how to use the inference parameters (temperature, top-p, top-k, length, etc.) so the model does more exactly what you want. No magic formulas, just knobs worth understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a foundation model?
&lt;/h2&gt;

&lt;p&gt;A foundation model (FM) is a very large AI model, trained on absurd amounts of text, images and other data, to learn general patterns about the world.&lt;/p&gt;

&lt;p&gt;I like to think of it this way:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's like someone who has read "the entire Internet" and can now write, summarize, translate, reason and even generate images, without you having to train it from scratch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In Amazon Bedrock you get a "catalog" of these models from different providers:&lt;br&gt;
Claude (Anthropic), Titan (Amazon), Llama (Meta), Mistral, etc.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some are better at writing long-form text.&lt;/li&gt;
&lt;li&gt;Others are faster and lighter for real-time chat.&lt;/li&gt;
&lt;li&gt;Others specialize in images or in generating embeddings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You pick the model the way you pick a tool from a toolbox: to drive nails I use a hammer, not a screwdriver.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Amazon Bedrock?
&lt;/h2&gt;

&lt;p&gt;Amazon Bedrock is the AWS service that gives you managed access to these foundation models through an API.&lt;/p&gt;

&lt;p&gt;AWS takes care of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Infrastructure (GPUs, scaling, availability).&lt;/li&gt;
&lt;li&gt;Security, authentication, quotas.&lt;/li&gt;
&lt;li&gt;The model catalog and new versions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You only worry about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choosing the model.&lt;/li&gt;
&lt;li&gt;Sending prompts.&lt;/li&gt;
&lt;li&gt;Tuning parameters.&lt;/li&gt;
&lt;li&gt;Integrating it into your application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of Bedrock as a Netflix of AI models: you don't manage the streaming servers; you just choose what to watch (which model to use) and at what quality (parameters).&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI's "remote controls": inference parameters
&lt;/h2&gt;

&lt;p&gt;When you call a model in Bedrock, you don't just send the prompt; you can also tune a set of parameters that shape how it responds.&lt;/p&gt;

&lt;p&gt;The metaphor I always use is a stereo system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The prompt is the song.&lt;/li&gt;
&lt;li&gt;The model is the amplifier.&lt;/li&gt;
&lt;li&gt;The parameters are the bass, treble and volume knobs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The song stays the same, but if you move the knobs, the experience changes a lot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Temperature: how much creativity you want
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;What it does: controls how "risky" the model is when picking the next token.&lt;/li&gt;
&lt;li&gt;Low values (0.1–0.3): very predictable responses, almost always the same.&lt;/li&gt;
&lt;li&gt;Medium values (0.4–0.7): a balance between creativity and control.&lt;/li&gt;
&lt;li&gt;High values (0.8–1): more creative responses, but also "wilder" ones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Analogy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's like the person who always orders the same dish at a restaurant (low temperature) vs. the one who tries something new from the menu every time (high temperature).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Typical uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product descriptions that follow a brand guide → temperature ~0.4–0.6.&lt;/li&gt;
&lt;li&gt;Brainstorming wild ideas → temperature ~0.8–0.9.&lt;/li&gt;
&lt;li&gt;Generating very precise code → temperature ~0.1–0.3.&lt;/li&gt;
&lt;/ul&gt;
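&lt;p&gt;To make the knob concrete, here is a minimal, self-contained Python sketch (not Bedrock-specific; the logits are made-up scores for four candidate tokens) showing how temperature reshapes the probability distribution the model samples from:&lt;/p&gt;

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide the logits by the temperature, then normalize into probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5, 0.1]  # hypothetical scores for 4 candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # low temperature: peaky, predictable
hot = softmax_with_temperature(logits, 1.0)   # high temperature: flatter, riskier

print(max(cold))  # the top token dominates
print(max(hot))   # probability mass spreads across the menu
```

&lt;p&gt;At temperature 0.2 the best-scoring token takes almost all the probability; at 1.0 the other options keep a real chance of being picked.&lt;/p&gt;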

&lt;h3&gt;
  
  
  top-p and top-k: how broad the menu is
&lt;/h3&gt;

&lt;p&gt;These two control the pool of candidate tokens the model is allowed to pick from.&lt;/p&gt;

&lt;p&gt;top-k: "choose only among the k most probable tokens".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small k → a very focused model.&lt;/li&gt;
&lt;li&gt;Large k → more diversity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;top-p (or nucleus sampling): "choose among the tokens that, together, add up to a cumulative probability of p".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Low p (~0.5–0.8) → more conservative.&lt;/li&gt;
&lt;li&gt;High p (~0.9–1.0) → more diverse.&lt;/li&gt;
&lt;/ul&gt;
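&lt;p&gt;The two filters can be sketched in a few lines of plain Python (a toy model: the probabilities are invented, and real implementations work over full vocabularies):&lt;/p&gt;

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens, then renormalize."""
    kept = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = [0.5, 0.3, 0.1, 0.06, 0.04]  # hypothetical next-token probabilities

print(top_k_filter(probs, 2))    # exactly the 2 best tokens survive
print(top_p_filter(probs, 0.75)) # however many tokens it takes to cover 0.75
```

&lt;p&gt;Note the difference: top-k always keeps a fixed number of options, while top-p keeps as many (or as few) as needed to cover the requested probability mass.&lt;/p&gt;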

&lt;p&gt;Analogy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If temperature is how adventurous you are,&lt;br&gt;
top-k/top-p is how much of the restaurant's menu you are willing to look at.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In many apps built on Bedrock, a good starting point is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;temperature: 0.5&lt;/li&gt;
&lt;li&gt;top-p: 0.8&lt;/li&gt;
&lt;li&gt;top-k: 20–50&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and then adjust as you see the responses.&lt;/p&gt;
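&lt;p&gt;As a sketch of how these knobs map onto an actual Bedrock call, here is a Converse-style request built with plain Python (assumptions: boto3's Converse API; the model ID is only an example, and top-k is not part of the standard inferenceConfig, so it travels in additionalModelRequestFields under a provider-specific name — verify the exact keys against the docs for your model):&lt;/p&gt;

```python
# Sketch of a Converse API request for Amazon Bedrock (boto3).
# The model ID below is illustrative; swap in one enabled in your account.
request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    "messages": [
        {"role": "user", "content": [{"text": "Write a 50-100 word product description."}]}
    ],
    "inferenceConfig": {
        "temperature": 0.5,   # the starting values suggested above
        "topP": 0.8,
        "maxTokens": 300,
    },
    # top-k is model-specific; "top_k" is the field Anthropic models expect.
    "additionalModelRequestFields": {"top_k": 40},
}

# With AWS credentials configured, you would send it like this:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
print(request["inferenceConfig"])
```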

&lt;h3&gt;
  
  
  Response length and stop sequences
&lt;/h3&gt;

&lt;p&gt;Besides what the model says, you can control how much it says and where it stops.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;maxTokens / response length: the maximum number of tokens/words it may generate. "Between 50 and 100 words for the product description" → this is the value you tune.&lt;/li&gt;
&lt;li&gt;Stop sequences: text strings that mean "that's enough, thanks". For example, "\n\nUser:" so it stops before echoing the user's name again.&lt;/li&gt;
&lt;/ul&gt;
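&lt;p&gt;A stop sequence is essentially a substring scan over the generated text. This little Python sketch mimics the behavior client-side so you can see both cut-offs at work (the model service applies them server-side, and maxTokens counts tokens rather than characters):&lt;/p&gt;

```python
def truncate(text, stop_sequences, max_chars):
    """Apply stop sequences first, then a hard length cap -- a rough
    client-side mirror of what the service does with stopSequences/maxTokens."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # cut at the earliest stop sequence found
    return text[:cut][:max_chars]

raw = "The oven timer is set.\n\nUser: and now what?"
print(truncate(raw, ["\n\nUser:"], 200))  # prints "The oven timer is set."
```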

&lt;p&gt;Analogy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's like setting a timer on the oven: you don't want the cake to keep baking until it burns.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chatbots: keep the model from rambling on and breaking the experience.&lt;/li&gt;
&lt;li&gt;Integrations with legacy systems: cut the output exactly where the application expects a delimiter.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Example use cases in Bedrock
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Product description generator
&lt;/h3&gt;

&lt;p&gt;Imagine you run an e-commerce site and want to generate descriptions of 50 to 100 words, creative but on-brand.&lt;/p&gt;

&lt;p&gt;Typical configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model: Claude or Titan (text).&lt;/li&gt;
&lt;li&gt;temperature: 0.5 (controlled creativity).&lt;/li&gt;
&lt;li&gt;top-p: 0.8.&lt;/li&gt;
&lt;li&gt;maxTokens set for ~100 words.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A prompt along these lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"You are the copywriter for brand X. Write a 50-100 word description for this product, keeping a friendly, professional tone. Product: {{nombre}}, features: {{caracteristicas}}"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Result: texts that aren't copies of each other, yet feel like they belong to the same family.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. A more "serious" support assistant
&lt;/h3&gt;

&lt;p&gt;For a technical support chatbot you want less creativity and more precision.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;temperature: 0.2–0.3.&lt;/li&gt;
&lt;li&gt;top-p: 0.5–0.7.&lt;/li&gt;
&lt;li&gt;A stop sequence to cut off at the end of the answer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Analogy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Here the model is like the friend who helps you file your taxes: better if it doesn't improvise too much.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3. Brainstorming campaign ideas
&lt;/h3&gt;

&lt;p&gt;If you're in the ideation phase of a marketing campaign, you want exactly the opposite.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;temperature: 0.8–0.9.&lt;/li&gt;
&lt;li&gt;top-p: 0.9–1.0.&lt;/li&gt;
&lt;li&gt;A higher maxTokens to let it stretch out.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"prompt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Give me 10 creative, unconventional ideas for a social media campaign about {{tema}}. Include a brief explanation for each one."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to start playing with these parameters in Bedrock
&lt;/h2&gt;

&lt;p&gt;My practical recommendation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the Amazon Bedrock playground in the console.&lt;/li&gt;
&lt;li&gt;Pick a text model (Claude, for example).&lt;/li&gt;
&lt;li&gt;Write the same prompt and try combinations:

&lt;ul&gt;
&lt;li&gt;temperature 0.1, 0.5, 0.9.&lt;/li&gt;
&lt;li&gt;top-p 0.6 vs 0.9.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Change maxTokens and watch how the responses get trimmed.&lt;/li&gt;
&lt;li&gt;Notice how the tone, diversity and precision change.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Do it as if you were adjusting the brightness, contrast and saturation of a photo: you don't need to understand all the equations behind it, just the feeling each change produces.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing: it's not magic, it's knobs
&lt;/h2&gt;

&lt;p&gt;The nice thing about working with foundation models in Amazon Bedrock is that you don't have to be an AI researcher to get value out of them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The models come pre-trained.&lt;/li&gt;
&lt;li&gt;Bedrock takes the infrastructure headache off your plate.&lt;/li&gt;
&lt;li&gt;The inference parameters are your knobs for adapting the AI to your specific use case.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you think of them as the equalizer on your favorite playlist, the rest is practice: try, listen, adjust, repeat.&lt;/p&gt;

&lt;p&gt;If you're interested, in another post I can get into slightly more advanced topics: how to combine these parameters with prompt engineering, RAG (retrieval-augmented generation) and guardrails to build systems that are not only creative, but also safe and aligned with your business.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>programming</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Introduction to Grafana</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Wed, 15 Oct 2025 16:32:27 +0000</pubDate>
      <link>https://dev.to/jjoc007/introduccion-a-grafana-5a4g</link>
      <guid>https://dev.to/jjoc007/introduccion-a-grafana-5a4g</guid>
      <description>&lt;p&gt;A complete beginner's guide to installing and getting started with Grafana on macOS.&lt;/p&gt;




&lt;h2&gt;
  
  
  1️⃣ Introduction
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Grafana?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Grafana&lt;/strong&gt; is an open-source monitoring and observability platform that lets you query, visualize, alert on and understand your metrics no matter where they are stored. It provides a powerful, elegant way to create, explore and share dashboards with your team and foster a data-driven culture.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Grafana used for?
&lt;/h3&gt;

&lt;p&gt;Grafana is commonly used for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure monitoring&lt;/strong&gt;: Tracking CPU, memory, disk usage and network metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application performance monitoring (APM)&lt;/strong&gt;: Keeping an eye on application health and performance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business analytics&lt;/strong&gt;: Visualizing KPIs and business metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IoT data visualization&lt;/strong&gt;: Displaying sensor data and device metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log analysis&lt;/strong&gt;: Aggregating and analyzing log data from multiple sources&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Architecture and Core Components
&lt;/h3&gt;

&lt;p&gt;Grafana's architecture consists of several key components:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data sources&lt;/strong&gt;: Connections to time-series databases and other data stores (Prometheus, InfluxDB, Elasticsearch, MySQL, PostgreSQL, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboards&lt;/strong&gt;: Collections of panels that display visualizations of your data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Panels&lt;/strong&gt;: Individual visualization components (graphs, tables, heatmaps, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Queries&lt;/strong&gt;: Data requests sent to your configured data sources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alerting&lt;/strong&gt;: A rule-based notification system that alerts you when metrics cross certain thresholds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Users and teams&lt;/strong&gt;: Role-based access control for managing permissions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugins&lt;/strong&gt;: An extensible architecture that supports custom data sources, panels and applications&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  2️⃣ Installation on macOS
&lt;/h2&gt;

&lt;p&gt;This guide uses &lt;strong&gt;Homebrew&lt;/strong&gt;, the popular package manager for macOS. If you don't have Homebrew installed, visit &lt;a href="https://brew.sh" rel="noopener noreferrer"&gt;brew.sh&lt;/a&gt; for installation instructions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Make sure Homebrew is installed on your system:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output similar to: &lt;code&gt;Homebrew 4.x.x&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Installation Steps
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Step 1: Install Grafana
&lt;/h4&gt;

&lt;p&gt;Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;==&amp;gt; Downloading grafana...
==&amp;gt; Pouring grafana--12.2.0.arm64_sequoia.bottle.1.tar.gz
🍺  /opt/homebrew/Cellar/grafana/12.2.0: 10,910 files, 625.8MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; The installation includes all required dependencies and will take a few minutes depending on your internet connection.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Step 2: Verify the Installation
&lt;/h4&gt;

&lt;p&gt;Check that Grafana installed correctly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew list | &lt;span class="nb"&gt;grep &lt;/span&gt;grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see &lt;code&gt;grafana&lt;/code&gt; in the output.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: Start Grafana
&lt;/h4&gt;

&lt;p&gt;You have two options for starting Grafana:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A: Using Homebrew services (recommended for persistent use)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew services start grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts Grafana as a background service that launches automatically when you log in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Expected output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;==&amp;gt; Successfully started `grafana` (label: homebrew.mxcl.grafana)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Option B: Run Grafana directly (for temporary use)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;grafana server &lt;span class="nt"&gt;--config&lt;/span&gt; /opt/homebrew/etc/grafana/grafana.ini &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--homepath&lt;/span&gt; /opt/homebrew/opt/grafana/share/grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Note:&lt;/strong&gt; If port 3000 is already in use, you can specify a different port:&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;grafana server &lt;span class="nt"&gt;--config&lt;/span&gt; /opt/homebrew/etc/grafana/grafana.ini &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--homepath&lt;/span&gt; /opt/homebrew/opt/grafana/share/grafana &lt;span class="se"&gt;\&lt;/span&gt;
  cfg:default.server.http_port&lt;span class="o"&gt;=&lt;/span&gt;3001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 4: Verify Grafana Is Running
&lt;/h4&gt;

&lt;p&gt;Check the service status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew services list | &lt;span class="nb"&gt;grep &lt;/span&gt;grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or check whether Grafana is responding on the default port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see an HTTP response with status &lt;code&gt;302&lt;/code&gt; (a redirect to the login page).&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 5: Access the Grafana Web Interface
&lt;/h4&gt;

&lt;p&gt;Open your web browser and navigate to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, if you changed the port:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://localhost:3001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the Grafana login page!&lt;/p&gt;

&lt;h3&gt;
  
  
  Stopping Grafana
&lt;/h3&gt;

&lt;p&gt;To stop the Grafana service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew services stop grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Managing the Grafana Service
&lt;/h3&gt;

&lt;p&gt;List all Homebrew services:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew services list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart Grafana:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew services restart grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  3️⃣ First Look at Grafana
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Initial Login
&lt;/h3&gt;

&lt;p&gt;When you access Grafana for the first time, you'll see the login screen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Default credentials:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Username:&lt;/strong&gt; &lt;code&gt;admin&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password:&lt;/strong&gt; &lt;code&gt;admin&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enter these credentials and click &lt;strong&gt;Log In&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iyng7k9dbkkhd5v7i2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0iyng7k9dbkkhd5v7i2y.png" alt="Login" width="499" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Password Change Prompt
&lt;/h3&gt;

&lt;p&gt;On first login, Grafana will immediately ask you to change the default password for security reasons.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You can:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Enter a strong new password and click &lt;strong&gt;Submit&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Skip&lt;/strong&gt; to change it later (not recommended for production use)&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;Security warning:&lt;/strong&gt; Always change the default password in production environments to prevent unauthorized access.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Welcome to Grafana!
&lt;/h3&gt;

&lt;p&gt;After logging in, you'll see the &lt;strong&gt;Home dashboard&lt;/strong&gt; (the welcome page).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdirwb2wai51pt40umm7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdirwb2wai51pt40umm7c.png" alt="Dashboard" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the User Interface
&lt;/h3&gt;

&lt;p&gt;The Grafana interface consists of several key areas:&lt;/p&gt;

&lt;h4&gt;
  
  
  Left Sidebar (Main Navigation)
&lt;/h4&gt;

&lt;p&gt;The collapsible left sidebar contains the main navigation menu:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🏠 Home&lt;/strong&gt;: Return to the home dashboard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🔍 Explore&lt;/strong&gt;: Ad-hoc data exploration without creating a dashboard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;⚠️ Alerting&lt;/strong&gt;: Configure and manage alert rules and notifications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📊 Dashboards&lt;/strong&gt;: Browse, create and manage dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🔌 Connections&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data sources&lt;/strong&gt;: Configure connections to your databases and services&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugins&lt;/strong&gt;: Browse and install additional plugins&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;⚙️ Administration&lt;/strong&gt; (admin only):

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Users&lt;/strong&gt;: Manage user accounts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teams&lt;/strong&gt;: Organize users into teams&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugins&lt;/strong&gt;: Manage installed plugins&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Settings&lt;/strong&gt;: Global Grafana configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Keys&lt;/strong&gt;: Generate API tokens for programmatic access&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v8q7ihk6zsbcel5b8yy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5v8q7ihk6zsbcel5b8yy.png" alt="Opciones" width="279" height="764"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Top Navigation Bar
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Organization name&lt;/strong&gt;: Click to switch organizations (if you have several)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Search&lt;/strong&gt;: Find dashboards quickly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create&lt;/strong&gt; (+): Create a new dashboard, folder or alert&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Help&lt;/strong&gt; (?): Access documentation and support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Profile&lt;/strong&gt;: Your user settings and preferences&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Main Content Area
&lt;/h4&gt;

&lt;p&gt;This is where dashboards, panels and configuration pages are displayed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Exploring Data Sources
&lt;/h3&gt;

&lt;p&gt;One of the first things you'll want to do is configure a data source.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Connections&lt;/strong&gt; → &lt;strong&gt;Data sources&lt;/strong&gt; in the left sidebar&lt;/li&gt;
&lt;li&gt;You'll see the Data sources page&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iq7s8nrg8o7hlofzesc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iq7s8nrg8o7hlofzesc.png" alt="Add Datasource" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Add data source&lt;/strong&gt; to see the available options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus&lt;/li&gt;
&lt;li&gt;Graphite&lt;/li&gt;
&lt;li&gt;InfluxDB&lt;/li&gt;
&lt;li&gt;MySQL, PostgreSQL&lt;/li&gt;
&lt;li&gt;Elasticsearch&lt;/li&gt;
&lt;li&gt;CloudWatch&lt;/li&gt;
&lt;li&gt;And many more...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4by6wygo285t6l2pyvvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4by6wygo285t6l2pyvvp.png" alt="Options" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; For your first experience you can use Grafana's built-in &lt;strong&gt;TestData DB&lt;/strong&gt; data source, which generates sample data for trying out visualizations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yfr6vxnlyj7y8a90246.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yfr6vxnlyj7y8a90246.png" alt="Test Data" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;
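&lt;p&gt;Data sources can also be provisioned from YAML files instead of the UI, which is handy for reproducible setups. A minimal sketch for a local Prometheus follows (the path assumes the Homebrew layout used in this guide; adjust it for your install):&lt;/p&gt;

```yaml
# Save as e.g. /opt/homebrew/etc/grafana/provisioning/datasources/prometheus.yaml
# and restart Grafana to pick it up.
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```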

&lt;h3&gt;
  
  
  Creating Your First Dashboard
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;+&lt;/strong&gt; icon in the top navigation or select &lt;strong&gt;Dashboards&lt;/strong&gt; from the sidebar&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create Dashboard&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;You'll see an empty dashboard with an option to &lt;strong&gt;Add visualization&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz3m8vrzzyw7l3fk8l4n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqz3m8vrzzyw7l3fk8l4n.png" alt="create dash" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3vu8mfnn86fyvxqtx20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3vu8mfnn86fyvxqtx20.png" alt="Dash created" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is where you'll build your first dashboard panels!&lt;/p&gt;




&lt;h2&gt;
  
  
  4️⃣ Conceptos Clave Explicados
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dashboards
&lt;/h3&gt;

&lt;p&gt;Un dashboard es una colección de paneles organizados en un diseño de cuadrícula. Los dashboards son:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shareable with team members&lt;/li&gt;
&lt;li&gt;Exportable as JSON&lt;/li&gt;
&lt;li&gt;Time-range controlled (you can adjust the time period for all panels at once)&lt;/li&gt;
&lt;/ul&gt;
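&lt;p&gt;Since dashboards are exportable as JSON, it helps to see roughly what that document looks like. Here is a minimal sketch in Python; the field names follow Grafana's dashboard JSON model, but the values are purely illustrative:&lt;/p&gt;

```python
import json

# A minimal dashboard document, illustrating the JSON shape Grafana
# exports. Field names follow Grafana's dashboard JSON model; the
# values here are purely illustrative.
dashboard = {
    "title": "My First Dashboard",
    "panels": [
        {
            "id": 1,
            "title": "Requests",
            "type": "timeseries",
            # grid position: 12 of 24 columns wide, 8 rows tall
            "gridPos": {"x": 0, "y": 0, "w": 12, "h": 8},
        }
    ],
    # the dashboard-wide time range described above
    "time": {"from": "now-6h", "to": "now"},
}

exported = json.dumps(dashboard, indent=2)
```

&lt;p&gt;A real export contains many more fields (schema version, templating, annotations), but the panel list, grid positions, and time range are the core of it.&lt;/p&gt;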

&lt;h3&gt;
  
  
  Panels
&lt;/h3&gt;

&lt;p&gt;Panels are the building blocks of dashboards. Each panel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Displays data from one or more queries&lt;/li&gt;
&lt;li&gt;Can be customized with various visualization types (graphs, tables, stats, etc.)&lt;/li&gt;
&lt;li&gt;Has its own appearance and behavior settings&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Data Sources
&lt;/h3&gt;

&lt;p&gt;Data sources are the backends where your metrics, logs, or other data are stored. Grafana supports dozens of data sources out of the box.&lt;/p&gt;
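&lt;p&gt;Data sources can also be added programmatically through Grafana's HTTP API (&lt;code&gt;POST /api/datasources&lt;/code&gt;). A minimal sketch in Python; the &lt;code&gt;testdata&lt;/code&gt; type string and the localhost URL are assumptions based on the default local install from this guide, and the request itself is left unsent since it needs a running Grafana and credentials:&lt;/p&gt;

```python
import json
from urllib import request

# Illustrative payload for Grafana's HTTP API (POST /api/datasources).
# The "type" string for the built-in test data source may vary by
# Grafana version; "testdata" is assumed here.
payload = {
    "name": "TestData",
    "type": "testdata",
    "access": "proxy",
}

req = request.Request(
    "http://localhost:3000/api/datasources",  # default local install
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would send it, but it requires a running
# Grafana instance and authentication, so the call is omitted here.
```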

&lt;h3&gt;
  
  
  Queries
&lt;/h3&gt;

&lt;p&gt;Queries define what data to retrieve from your data sources. The query language depends on the data source type (PromQL for Prometheus, SQL for databases, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  Time Range
&lt;/h3&gt;

&lt;p&gt;Grafana dashboards are time-centric. The time range picker in the top-right corner controls which time period you are viewing across all panels.&lt;/p&gt;
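&lt;p&gt;Relative ranges in the picker ("Last 6 hours") are stored as expressions like &lt;code&gt;now-6h&lt;/code&gt; that Grafana resolves against the current time. A small illustrative parser, greatly simplified; Grafana's own grammar also supports forms like rounding with &lt;code&gt;/d&lt;/code&gt; that are not handled here:&lt;/p&gt;

```python
import re
from datetime import datetime, timedelta, timezone

def resolve_relative(expr: str, now: datetime) -> datetime:
    """Resolve a Grafana-style relative time expression, e.g. 'now-6h'.

    Only 'now' and the plain offset form are handled; this is an
    illustration, not Grafana's full time-expression grammar.
    """
    if expr == "now":
        return now
    m = re.fullmatch(r"now-(\d+)([smhd])", expr)
    if not m:
        raise ValueError(f"unsupported expression: {expr}")
    unit = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}[m.group(2)]
    return now - timedelta(**{unit: int(m.group(1))})

now = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
start = resolve_relative("now-6h", now)  # six hours before `now`
```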




&lt;h2&gt;
  
  
  5️⃣ Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have successfully completed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Installing Grafana on macOS using Homebrew&lt;/li&gt;
&lt;li&gt;✅ Starting the Grafana service&lt;/li&gt;
&lt;li&gt;✅ Accessing the Grafana web interface&lt;/li&gt;
&lt;li&gt;✅ Logging in and changing the default password&lt;/li&gt;
&lt;li&gt;✅ Exploring the Grafana user interface and key components&lt;/li&gt;
&lt;li&gt;✅ Learning about data sources and dashboards&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Next Steps
&lt;/h3&gt;

&lt;p&gt;Now that you have Grafana up and running, here are some recommended next steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add a Data Source&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up a connection to your metrics database&lt;/li&gt;
&lt;li&gt;Or use the built-in TestData DB to practice&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create Your First Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add panels with different visualization types&lt;/li&gt;
&lt;li&gt;Experiment with queries and transformations&lt;/li&gt;
&lt;li&gt;Customize the appearance of the panels&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Explore Sample Dashboards&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import pre-built dashboards from &lt;a href="https://grafana.com/grafana/dashboards/" rel="noopener noreferrer"&gt;grafana.com/dashboards&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Learn from community examples&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Alerts&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up alert rules to monitor important metrics&lt;/li&gt;
&lt;li&gt;Set up notification channels (email, Slack, etc.)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Learn Query Languages&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Study the query language for your data source&lt;/li&gt;
&lt;li&gt;Practice building complex queries&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Additional Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Official Documentation&lt;/strong&gt;: &lt;a href="https://grafana.com/docs/grafana/latest/" rel="noopener noreferrer"&gt;grafana.com/docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Forums&lt;/strong&gt;: &lt;a href="https://community.grafana.com/" rel="noopener noreferrer"&gt;community.grafana.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YouTube Tutorials&lt;/strong&gt;: Search for "Grafana tutorial" for video guides&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboard Repository&lt;/strong&gt;: &lt;a href="https://grafana.com/grafana/dashboards/" rel="noopener noreferrer"&gt;grafana.com/dashboards&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Common Commands Reference
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start Grafana&lt;/span&gt;
brew services start grafana

&lt;span class="c"&gt;# Stop Grafana&lt;/span&gt;
brew services stop grafana

&lt;span class="c"&gt;# Restart Grafana&lt;/span&gt;
brew services restart grafana

&lt;span class="c"&gt;# Check service status&lt;/span&gt;
brew services list | &lt;span class="nb"&gt;grep &lt;/span&gt;grafana

&lt;span class="c"&gt;# View Grafana logs&lt;/span&gt;
&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /opt/homebrew/var/log/grafana/grafana.log

&lt;span class="c"&gt;# Upgrade Grafana&lt;/span&gt;
brew upgrade grafana
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Happy monitoring with Grafana!&lt;/strong&gt; 🎉&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>monitoring</category>
      <category>observability</category>
      <category>programming</category>
    </item>
    <item>
      <title>Practice AWS Certification Question: AWS Solutions Architect Professional — Lambda — ECR</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Sun, 03 Nov 2024 15:24:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/practice-aws-certification-question-aws-solutions-architect-professional-lambda-ecr-2717</link>
      <guid>https://dev.to/aws-builders/practice-aws-certification-question-aws-solutions-architect-professional-lambda-ecr-2717</guid>
      <description>&lt;p&gt;When studying for an AWS certification, we often encounter questions that require a deep understanding of services we might not use every day. Understanding these services only at a theoretical level can lead to confusion and mistakes during the exam. So, how can we improve our comprehension and retention of these complex topics?&lt;/p&gt;

&lt;p&gt;One of the best ways to tackle this challenge is by practicing directly with AWS services, replicating question scenarios in a real environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services Associated with the Question&lt;/strong&gt;: Lambda and ECR&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Domain&lt;/strong&gt;: Designing New Solutions&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Question&lt;/strong&gt;: Your team is developing a new Lambda function for a microservice component. You need to package and deploy the Lambda function as a container image. The container image must be based on the &lt;code&gt;python:buster&lt;/code&gt; image with other dependencies and libraries installed. To use the container image correctly for the Lambda function, which of the following actions is necessary?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer&lt;/strong&gt;: Install the runtime interface client in the container image to make it compatible with Lambda.&lt;/p&gt;

&lt;h3&gt;
  
  
  Related Services
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: Lambda is an AWS service that allows you to run code without provisioning or managing servers. This service supports multiple programming languages and, more recently, allows the use of container images as execution environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Relevant Features&lt;/strong&gt;: When using Lambda with container images, it is essential to include a "runtime interface client" in the image. This client is an API within the container that enables Lambda to interact correctly with the runtime environment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Amazon Elastic Container Registry (ECR)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Description&lt;/strong&gt;: ECR is a fully managed container registry service that simplifies the storage, management, and deployment of Docker container images.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Relevant Features&lt;/strong&gt;: Although it is not necessary to install an ECR agent in the container for this question, ECR is still relevant for storing and managing the container images that will be run on Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Practical Exercise to Demonstrate the Correct Answer
&lt;/h3&gt;

&lt;p&gt;To test this configuration, you can follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a Dockerfile that installs the runtime interface client (&lt;code&gt;awslambdaric&lt;/code&gt;).
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:buster

RUN pip install awslambdaric

WORKDIR /var/task

COPY app.py /var/task/

ENTRYPOINT ["python3", "-m", "awslambdaric"]

CMD ["app.handler"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create a basic handler for the Lambda execution.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json

def handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
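&lt;p&gt;Before packaging, the handler can be sanity-checked locally: it is plain Python with no Lambda-specific imports, so you can invoke it directly. Passing &lt;code&gt;None&lt;/code&gt; for the context is fine here only because this handler ignores it:&lt;/p&gt;

```python
import json

def handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

# Invoke the handler the way Lambda would, with a dummy event.
# `context` is unused by this handler, so None stands in for the
# real Lambda context object.
response = handler({}, None)
```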



&lt;ol start="3"&gt;
&lt;li&gt;Build the Docker image locally:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t lambda-container-demo .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitpc1b82qcwhia66quva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitpc1b82qcwhia66quva.png" alt="Image description" width="720" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Authenticate to the AWS account using environment variables:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=my-key
export AWS_SECRET_ACCESS_KEY=my-secret
export AWS_DEFAULT_REGION=us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;Create the repository in ECR to upload our image:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name lambda-container-demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;Verify that it has been created in the ECR service from the console:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqqedc7cwypm2vb5bxwo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqqedc7cwypm2vb5bxwo.png" alt="Image description" width="401" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Run the following commands to authenticate and upload the container:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Log in to ECR and push the image (the repository was created in the previous step)
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin &amp;lt;your_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com
docker tag lambda-container-demo:latest &amp;lt;your_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/lambda-container-demo:latest
docker push &amp;lt;your_account_id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/lambda-container-demo:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
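&lt;p&gt;The registry URI that appears in the tag and push commands follows a fixed pattern, which a small hypothetical helper makes explicit (the account id below is a placeholder, not a real account):&lt;/p&gt;

```python
def ecr_image_uri(account_id: str, region: str, repo: str, tag: str = "latest") -> str:
    """Compose an ECR image URI of the form used in the commands above:
    ACCOUNT.dkr.ecr.REGION.amazonaws.com/REPO:TAG
    """
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repo}:{tag}"

# "123456789012" is a placeholder account id for illustration.
uri = ecr_image_uri("123456789012", "us-east-1", "lambda-container-demo")
```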



&lt;ol start="8"&gt;
&lt;li&gt;Create a Lambda function based on this ECR image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fpzt13sjtbt800lxmb2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fpzt13sjtbt800lxmb2.png" alt="1" width="720" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="9"&gt;
&lt;li&gt;Select the previously created image.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx12a8exmad745bitipu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx12a8exmad745bitipu.png" alt="2" width="720" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="10"&gt;
&lt;li&gt;Test the Lambda function to verify the expected behavior.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tiig00cvrfshb1ihocp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0tiig00cvrfshb1ihocp.png" alt="3" width="720" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This concludes the practice for answering this AWS Architect Professional certification question.&lt;/p&gt;

&lt;p&gt;Even if you've never used ECR or Lambda with container images before, this exercise walks through the basic concepts, and practicing them hands-on will help you retain them long-term.&lt;/p&gt;




&lt;p&gt;If you've enjoyed this article, feel free to give a 👏.&lt;br&gt;
🤔 &lt;strong&gt;Follow me on social media!&lt;/strong&gt; ⏬&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YouTube&lt;/strong&gt;: &lt;a href="https://www.youtube.com/jjoc007" rel="noopener noreferrer"&gt;https://www.youtube.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter&lt;/strong&gt;: &lt;a href="https://twitter.com/jjoc007" rel="noopener noreferrer"&gt;https://twitter.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/jjoc007" rel="noopener noreferrer"&gt;https://github.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium&lt;/strong&gt;: &lt;a href="https://jjoc007.com" rel="noopener noreferrer"&gt;https://jjoc007.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/jjoc007/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/jjoc007/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazon</category>
      <category>professional</category>
      <category>certification</category>
    </item>
    <item>
      <title>Analysis of AWS Certification Question: AWS Solutions Architect Professional — VPC Flow Logs — Kinesis Data Firehose</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Sun, 03 Nov 2024 15:13:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/analysis-of-aws-certification-question-aws-solutions-architect-professional-vpc-flow-logs-kinesis-data-firehose-5ene</link>
      <guid>https://dev.to/aws-builders/analysis-of-aws-certification-question-aws-solutions-architect-professional-vpc-flow-logs-kinesis-data-firehose-5ene</guid>
      <description>&lt;p&gt;When studying for an AWS certification, we often encounter questions that require a deep understanding of services we might not use every day. Understanding these services only at a theoretical level can lead to confusion and mistakes during the exam. So, how can we improve our comprehension and retention of these complex topics?&lt;/p&gt;

&lt;p&gt;One of the best ways to tackle this challenge is by practicing directly with AWS services, replicating question scenarios in a real environment. In this article, I will guide you through the analysis of a specific question about VPC Flow Logs and Kinesis Data Firehose, breaking down each component and showing how you can build a similar practice in your AWS account. This hands-on approach not only reinforces key concepts but also provides real insight into how these services work in practice, helping us turn theory into applicable knowledge.&lt;/p&gt;

&lt;h3&gt;
  
  
  Domain: Design Solutions for Organizational Complexity
&lt;/h3&gt;

&lt;p&gt;An engineering firm has deployed a critical application on web servers running on Amazon EC2 instances launched in a VPC. The operations team is looking for a detailed analysis of the traffic from these web servers. They have enabled VPC Flow Logs on the VPC. The logs need to be analyzed using open-source tools in near real-time and visualized to create dashboards.&lt;/p&gt;

&lt;h4&gt;
  
  
  Proposed Solution:
&lt;/h4&gt;

&lt;p&gt;Ingest the VPC Flow Logs into Amazon Kinesis Data Firehose, which will deliver these logs to Amazon OpenSearch Service for near real-time analysis and visualization of the logs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problem Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2lac4mohc7a23cxjnyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2lac4mohc7a23cxjnyr.png" alt="1" width="621" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Involved Services
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;VPC Flow Logs&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
Captures and stores information about the network traffic entering and leaving network interfaces in your Amazon VPC. VPC Flow Logs allow recording details about network connections for analysis and monitoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Kinesis Data Firehose&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
A real-time data ingestion service that facilitates loading large volumes of data into storage and analysis services. In this case, Kinesis Data Firehose acts as the intermediary that collects the VPC Flow Logs and sends them to Amazon OpenSearch Service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon OpenSearch Service&lt;/strong&gt;:&lt;br&gt;&lt;br&gt;
An AWS-managed platform for search, analysis, and real-time data visualization. It is commonly used to work with logs and telemetry data. OpenSearch Service allows storing and analyzing VPC Flow Logs and creating dashboards to visualize them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps to Replicate the Exercise:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;: Configure VPC Flow Logs, send the logs to Kinesis Data Firehose, and visualize the data in Amazon OpenSearch Service.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create an OpenSearch Service domain to store and analyze the VPC Flow Logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a Firehose delivery stream to send the flow logs to the OpenSearch Service domain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a VPC Flow Log subscription to the delivery stream.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Explore the VPC Flow Logs on the OpenSearch Service dashboards.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a role mapping with an OpenSearch Service user to the Kinesis Data Firehose service role. Since we are using a public access domain for OpenSearch Service, we need to assign the IAM role of the delivery stream to the OpenSearch Service principal user to send bulk logs to the OpenSearch Service domain.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an index pattern in the OpenSearch Service dashboards to enable analysis and visualization of the VPC logs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
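&lt;p&gt;The subscription in step 3 can also be sketched programmatically. Assuming boto3, these are illustrative parameters for &lt;code&gt;ec2.create_flow_logs&lt;/code&gt;; the VPC id and Firehose delivery stream ARN below are placeholders, and the actual API call is omitted since it requires AWS credentials:&lt;/p&gt;

```python
# Illustrative parameters for subscribing VPC Flow Logs to a Firehose
# delivery stream (step 3). With boto3, this dict would be passed as
# ec2.create_flow_logs(**params); the call is omitted here because it
# needs real AWS credentials. The VPC id and ARN are placeholders.
params = {
    "ResourceIds": ["vpc-0123456789abcdef0"],   # placeholder VPC id
    "ResourceType": "VPC",
    "TrafficType": "ALL",                       # capture accepted and rejected traffic
    "LogDestinationType": "kinesis-data-firehose",
    "LogDestination": (
        "arn:aws:firehose:us-east-1:123456789012:"
        "deliverystream/vpc-flow-logs-demo"     # placeholder stream ARN
    ),
}
```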

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;As a prerequisite, you need to create an Amazon Simple Storage Service (Amazon S3) bucket to store Firehose delivery stream backups and failed logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;⚠️ 💰 🤑 &lt;strong&gt;Before proceeding, be cautious of the potential costs that may be incurred by executing these steps and remember to delete the resources after completing the exercise&lt;/strong&gt; ⚠️ 💰 🤑&lt;/p&gt;

&lt;h3&gt;
  
  
  Step-by-Step Execution of the Tutorial:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Creating an S3 Bucket to Store Backups of Kinesis Data Firehose Messages:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk066rsva9qtwbho153p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk066rsva9qtwbho153p6.png" alt="2" width="631" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the OpenSearch domain:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdipme7cshkuixdxl5pgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdipme7cshkuixdxl5pgc.png" alt="3" width="720" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12opizjflergv4qaaekb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12opizjflergv4qaaekb.png" alt="4" width="720" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhmlv7nlyxilau31pw56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhmlv7nlyxilau31pw56.png" alt="5" width="720" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F833j8veoza568d7mu0d7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F833j8veoza568d7mu0d7.png" alt="6" width="720" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3g1r9jtqgxmqx2uf35g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3g1r9jtqgxmqx2uf35g.png" alt="7" width="720" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnl4qudxrwrk64flgj2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnl4qudxrwrk64flgj2b.png" alt="8" width="720" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a few minutes, we will see the created domain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmnw2ahhi3mmouij43ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxmnw2ahhi3mmouij43ui.png" alt="9" width="720" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Kinesis Data Firehose:
&lt;/h3&gt;

&lt;p&gt;Go to the Amazon Kinesis Data Firehose console and create a new Firehose stream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxie3cgz2qoq083e9zsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxie3cgz2qoq083e9zsg.png" alt="11" width="720" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3v0r1b2mullwkr7eon3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3v0r1b2mullwkr7eon3.png" alt="12" width="720" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkusdqs72zvtx5gowye9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkusdqs72zvtx5gowye9e.png" alt="13" width="720" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Select the domain created previously:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewu2tub610s3wkqvi9vh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fewu2tub610s3wkqvi9vh.png" alt="14" width="720" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj3zss1m1wlgi37c6qfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faj3zss1m1wlgi37c6qfl.png" alt="15" width="720" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgazz3hs88oltk4qjzxlm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgazz3hs88oltk4qjzxlm.png" alt="16" width="720" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create an S3 bucket to store backups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xt0xsp6tk4p7a0o5i33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xt0xsp6tk4p7a0o5i33.png" alt="17" width="720" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xfzlot2bdbglwasuwa8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xfzlot2bdbglwasuwa8.png" alt="18" width="720" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating VPC Flow Logs:
&lt;/h3&gt;

&lt;p&gt;Go to the VPC service and select the VPC to which you want to add the configuration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5c6cewz3915vsqfeivc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5c6cewz3915vsqfeivc.png" alt="19" width="720" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a flow log:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3d7m7tgxqzhuh6ob6b7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3d7m7tgxqzhuh6ob6b7b.png" alt="21" width="720" height="102"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm1o78ypbkn8cto64cb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm1o78ypbkn8cto64cb8.png" alt="22" width="720" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the Firehose stream that we created previously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq086zbj9lbbfe8z7xe7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq086zbj9lbbfe8z7xe7m.png" alt="23" width="720" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsolk6xx1njw70qfkazv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsolk6xx1njw70qfkazv9.png" alt="24" width="720" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the OpenSearch dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67x51fd0osyh8zibxi4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F67x51fd0osyh8zibxi4s.png" alt="25" width="448" height="115"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hdcq07e2ka78x5exsdw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hdcq07e2ka78x5exsdw.png" alt="26" width="340" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the OpenSearch Service domain uses a public access endpoint with fine-grained access control, you must map the IAM role created for the Firehose delivery stream to an OpenSearch dashboard role so that the delivery stream can send bulk requests to the domain.&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;Security &amp;gt; Roles&lt;/strong&gt; and select the &lt;strong&gt;all_access&lt;/strong&gt; role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkvww87c1q2d72bknk1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgkvww87c1q2d72bknk1o.png" alt="27" width="720" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxulijvrxq0wttlxwdoxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxulijvrxq0wttlxwdoxl.png" alt="28" width="720" height="98"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copy the ARN of the role generated by Kinesis Firehose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5wyppxzje8v0xgjikg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5wyppxzje8v0xgjikg1.png" alt="29" width="613" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Return to Home, then go to &lt;strong&gt;Manage&lt;/strong&gt; and select &lt;strong&gt;Index Patterns&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyb5vw3b9705mma0ptvb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyb5vw3b9705mma0ptvb.png" alt="30" width="239" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create one with the pattern &lt;code&gt;vpcflowlogs*&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd8e2he93bfnhnqbhu2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyd8e2he93bfnhnqbhu2o.png" alt="31" width="720" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9k6wxte0fjs8oamip58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9k6wxte0fjs8oamip58.png" alt="32" width="332" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, we will be able to see the logs coming from the &lt;code&gt;vpcflowlogs&lt;/code&gt; index.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgu0xsoxxgzr331fn6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cgu0xsoxxgzr331fn6i.png" alt="33" width="720" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also check the Kinesis Data Firehose metrics to see the flow of records between the VPC and OpenSearch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41s2vda13tc6hzkm8bda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F41s2vda13tc6hzkm8bda.png" alt="34" width="720" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This concludes the hands-on exercise for this AWS Solutions Architect Professional certification question.&lt;/p&gt;

&lt;p&gt;If you had never used OpenSearch or Kinesis before, this exercise introduced basic concepts that hands-on practice can help you retain long-term.&lt;/p&gt;

&lt;h2&gt;
  
  
  References:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Question obtained from the simulator provided by &lt;a href="https://www.whizlabs.com/" rel="noopener noreferrer"&gt;https://www.whizlabs.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/big-data/stream-vpc-flow-logs-to-amazon-opensearch-service-via-amazon-kinesis-data-firehose/" rel="noopener noreferrer"&gt;Stream VPC flow logs to Amazon OpenSearch Service via Amazon Kinesis Data Firehose&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you've enjoyed this article, feel free to give a 👏 and ⭐ to the repository.&lt;/p&gt;

&lt;p&gt;🤔 &lt;strong&gt;Follow me on social media!&lt;/strong&gt; ⏬&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YouTube&lt;/strong&gt;: &lt;a href="https://www.youtube.com/jjoc007" rel="noopener noreferrer"&gt;https://www.youtube.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter&lt;/strong&gt;: &lt;a href="https://twitter.com/jjoc007" rel="noopener noreferrer"&gt;https://twitter.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/jjoc007" rel="noopener noreferrer"&gt;https://github.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium&lt;/strong&gt;: &lt;a href="https://jjoc007.com" rel="noopener noreferrer"&gt;https://jjoc007.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/jjoc007/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/jjoc007/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>professional</category>
      <category>amazon</category>
      <category>certification</category>
    </item>
    <item>
      <title>Analysis of AWS Solutions Architect Professional Certification Question — EC2 Image Builder and Resource Access Manager</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Sun, 27 Oct 2024 04:33:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/analysis-of-aws-solutions-architect-professional-certification-question-ec2-image-builder-and-resource-access-manager-3c5b</link>
      <guid>https://dev.to/aws-builders/analysis-of-aws-solutions-architect-professional-certification-question-ec2-image-builder-and-resource-access-manager-3c5b</guid>
      <description>&lt;p&gt;When studying for an AWS certification, we often encounter questions that require in-depth knowledge of services we may not use daily. Understanding these services at a purely theoretical level can lead to confusion and mistakes during the exam. So, how can we improve our understanding and retention of these complex topics?&lt;/p&gt;

&lt;p&gt;One of the best ways to tackle this challenge is to practice directly with AWS services, replicating question scenarios in a real environment. In this article, I will guide you through analyzing a specific question on EC2 Image Builder and AWS Resource Access Manager (RAM), breaking down each component and showing how you can set up a similar practice in your AWS account—even if you don't have access to an AWS Organization. This practical approach not only reinforces key concepts but also provides a real view of how these services function in practice, helping us turn theory into applicable knowledge.&lt;/p&gt;




&lt;h2&gt;
  
  
  Domain: Design for New Solutions
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Question:&lt;/strong&gt; You are assisting a team in creating multiple AMIs and Docker images through EC2 Image Builder pipelines. Other teams want to use the same EC2 Image Builder resources, including components, recipes, and images, in their image pipelines. You need to find an appropriate approach to share resources with other organizational units within the AWS Organization or specific AWS accounts. Which of the following methods is suitable?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Answer:&lt;/strong&gt; In AWS Resource Access Manager (RAM), add the shared components, images, or recipes to shared resources and configure the principals that are permitted to access the shared resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  Analysis, Practice, and Demonstration of the Answer
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Services Involved
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AWS EC2 Image Builder
&lt;/h4&gt;

&lt;p&gt;This AWS service allows you to automate the creation and management of system images (such as EC2 AMIs or Docker images). You can define "pipelines" that build images according to specifications (components, recipes, tests, etc.) and update them automatically as needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  AWS Resource Access Manager (RAM)
&lt;/h4&gt;

&lt;p&gt;RAM is an AWS service that enables sharing resources across accounts within an AWS Organization without needing to duplicate them. You can use RAM to share EC2 Image Builder components, AMIs, subnets, VPCs, and more with other accounts in your organization or specific external accounts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Related Concepts
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Build Component (AWS EC2 Image Builder)
&lt;/h4&gt;

&lt;p&gt;In EC2 Image Builder, Build Components are scripts or command sequences that define custom configurations and installation steps for an image. These components allow for automating the installation, configuration, and validation of software and settings in the final image you are building.&lt;/p&gt;

&lt;h5&gt;
  
  
  Role of Build Components
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customization&lt;/strong&gt;: You can use components to add specific software or make custom configurations in the image, such as installing web servers, databases, monitoring agents, or any other necessary software.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Components automate complex configurations. Instead of configuring each instance manually after the image is created, you can have the final image already include everything needed, reducing errors and saving time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modularity&lt;/strong&gt;: Components are modular, meaning you can create a component once and reuse it across multiple recipes or pipelines. This is useful for maintaining consistent configurations across multiple images.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Examples of Common Components
&lt;/h4&gt;

&lt;p&gt;Some examples of components you might find or create include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installing a web server (e.g., Apache or NGINX).&lt;/li&gt;
&lt;li&gt;Installing and configuring monitoring agents (such as CloudWatch or Datadog).&lt;/li&gt;
&lt;li&gt;Adding extra security configurations or installing development tool packages.&lt;/li&gt;
&lt;li&gt;Operating system configuration tweaks, such as modifying network settings or kernel parameters.&lt;/li&gt;
&lt;/ul&gt;
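<p>For reference, a build component is defined as a YAML document with phases and steps. This is a minimal sketch of a component that installs NGINX; the component name and commands are illustrative:</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: InstallNginx
description: Installs and enables NGINX on Amazon Linux 2
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallNginxStep
        action: ExecuteBash
        inputs:
          commands:
            - sudo amazon-linux-extras install -y nginx1
            - sudo systemctl enable nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;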




&lt;h2&gt;
  
  
  Summary of Steps to Replicate the Exercise:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;: Create an image with EC2 Image Builder and share it with another account using RAM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Set Up EC2 Image Builder
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Access the EC2 Image Builder console in AWS.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a New Image Pipeline&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to “Image pipelines” and select “Create Image Pipeline.”&lt;/li&gt;
&lt;li&gt;Assign a name to the pipeline (e.g., &lt;code&gt;MyCustomImagePipeline&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;In “Recipe,” select or create a recipe that defines the operating system and any additional components you want in the image.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Define the Recipe&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If creating a new recipe, select a base image (e.g., Amazon Linux 2).&lt;/li&gt;
&lt;li&gt;Add components (e.g., system updates, specific software installation).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Tests (Optional)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add tests to ensure the created image meets certain requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Define the Distribution Policy (Optional)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decide in which regions the image will be available.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure the Pipeline and Save&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 2: Build the Image
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Once the pipeline is configured, start a manual build to generate the first image or wait for the pipeline to execute an automatic build based on the defined schedule.&lt;/li&gt;
&lt;li&gt;Check the progress in “Image Pipeline” and wait for the image to be ready.&lt;/li&gt;
&lt;/ol&gt;
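&lt;p&gt;The manual build can also be started from the CLI; the pipeline ARN below is a placeholder for the one created in Step 1:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trigger a manual execution of the image pipeline
aws imagebuilder start-image-pipeline-execution \
  --image-pipeline-arn arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/mycustomimagepipeline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;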

&lt;h3&gt;
  
  
  Step 3: Share the Image with AWS RAM
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Access the Resource Access Manager (RAM) console.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a New Resource Share&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select “Create resource share.”&lt;/li&gt;
&lt;li&gt;Assign a name (e.g., &lt;code&gt;ImageShareForOtherTeams&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add Resources to the Resource Share&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the “Resources” section, select the resource type as “EC2 Image Builder resources.”&lt;/li&gt;
&lt;li&gt;Add the image, components, or recipes you want to share.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure Principals&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In “Principals,” select specific AWS accounts or organizational units (OUs) within your AWS Organization with whom you want to share the resources.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Review and Create&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review the configuration and select “Create resource share.”&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
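&lt;p&gt;The equivalent resource share can be created from the CLI. The image ARN and the consumer account ID below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Share the Image Builder image with another account
aws ram create-resource-share \
  --name ImageShareForOtherTeams \
  --resource-arns arn:aws:imagebuilder:us-east-1:123456789012:image/my-custom-image/1.0.0/1 \
  --principals 111122223333
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;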

&lt;h3&gt;
  
  
  Step 4: Verify Access from Another Account
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;In an account with which you shared the image, access the EC2 console.&lt;/li&gt;
&lt;li&gt;Go to “Shared AMIs” and confirm that the image is available for launch.&lt;/li&gt;
&lt;/ol&gt;
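&lt;p&gt;From the consumer account, you can also confirm the share from the CLI by listing the AMIs you have launch permission for; the owning account ID below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List AMIs shared with this account by the owning account
aws ec2 describe-images \
  --executable-users self \
  --owners 123456789012
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;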




&lt;p&gt;⚠️ 💰 🤑 Before continuing, be cautious of the potential costs incurred by executing these steps and remember to delete the resources after completing the exercise ⚠️ 💰 🤑 Approx. $2 USD&lt;/p&gt;




&lt;h2&gt;
  
  
  Step-by-Step Execution
&lt;/h2&gt;

&lt;h3&gt;
  
  
  EC2 Image Builder (Pipeline Creation):
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31dthbhw3limn74xoddy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31dthbhw3limn74xoddy.png" alt="1" width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the pipeline creation is for testing purposes, we will limit ourselves to filling in only the name and description.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7yv7l1wnag6ftc1vu01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7yv7l1wnag6ftc1vu01.png" alt="2" width="720" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The build will be manual since we are only testing the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fva0f2j7qlpk3xiscaygr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fva0f2j7qlpk3xiscaygr.png" alt="3" width="720" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Image Builder can produce both AMIs and Docker images; for simplicity, we will choose AMI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0atgjq05bi8f6vl0pl0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0atgjq05bi8f6vl0pl0l.png" alt="4" width="720" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n07srobqizjddq82j6f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5n07srobqizjddq82j6f.png" alt="5" width="720" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmok63mb2b0hmqwpzfgo0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmok63mb2b0hmqwpzfgo0.png" alt="6" width="720" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pu468ct12cnagle8kgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pu468ct12cnagle8kgy.png" alt="7" width="720" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can choose an initial configuration (user data) for the instance; in this example, a server is created with a page and a message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyns0329kyt45y4abykj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyns0329kyt45y4abykj.png" alt="8" width="720" height="339"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
yum update -y
yum install -y httpd git
systemctl start httpd
systemctl enable httpd
echo "&amp;lt;h1&amp;gt;Welcome to EC2 Image Builder&amp;lt;/h1&amp;gt;" &amp;gt; /var/www/html/index.html

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdiqki382qzr3sh7lrmja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdiqki382qzr3sh7lrmja.png" alt="9" width="720" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In build components, we can select one or more components; some are already predefined, and we can also create our own.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63hs4vgkw3nyhjwia1ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63hs4vgkw3nyhjwia1ao.png" alt="10" width="720" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also select validation components, which are used to ensure that the build component has executed successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3yxds6c1tkqxi57p9tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr3yxds6c1tkqxi57p9tl.png" alt="11" width="720" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Storage:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55u1a6hq8ju2h7yesjci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55u1a6hq8ju2h7yesjci.png" alt="12" width="720" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0u62vpt0bt1ud15ie5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0u62vpt0bt1ud15ie5r.png" alt="13" width="720" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wtuxpowcscj1q83i6x6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wtuxpowcscj1q83i6x6.png" alt="14" width="720" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3atc90t3olmdk3e6ve6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3atc90t3olmdk3e6ve6.png" alt="15" width="720" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pipeline created successfully:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvz1gsbszbtltvkwztp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figvz1gsbszbtltvkwztp.png" alt="16" width="720" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  EC2 Image Builder (Pipeline Execution):
&lt;/h3&gt;

&lt;p&gt;We run the pipeline that we created previously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujukg5mqxx4fl15qxqav.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujukg5mqxx4fl15qxqav.png" alt="17" width="720" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This launches a build instance, applies the configuration, and executes the build components we selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfwj59eim6v6nzqnovl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flfwj59eim6v6nzqnovl8.png" alt="18" width="720" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie7lbnqvbr6aobqrrgw5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fie7lbnqvbr6aobqrrgw5.png" alt="19" width="720" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the build completes successfully, the instance is stopped and the creation of the AMI begins.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7gvi7dh9fis8uypls5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd7gvi7dh9fis8uypls5z.png" alt="20" width="720" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this, the instance is terminated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp04q494q7zsorob9888.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp04q494q7zsorob9888.png" alt="21" width="720" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then an instance is created to test the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0x5a29n4fqg56kidahz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp0x5a29n4fqg56kidahz.png" alt="22" width="720" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once verification completes, the AMI is available for use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi47r3qch30w4mgr9fupx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi47r3qch30w4mgr9fupx.png" alt="23" width="720" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  RAM (Resource Access Manager)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Resource Creation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljt9jxhl0sz4fh3rx5ok.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljt9jxhl0sz4fh3rx5ok.png" alt="24" width="720" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Resources to share:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0rlp9zjvzifh2sijk0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft0rlp9zjvzifh2sijk0j.png" alt="25" width="720" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu9s0jhid38f5i6xngbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmu9s0jhid38f5i6xngbh.png" alt="26" width="720" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9fb4u6ajeeufl6r5p7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9fb4u6ajeeufl6r5p7e.png" alt="Image description" width="720" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a limitation of this exercise, I do not have another external account or an AWS Organization to share with, but as the image shows, EC2 Image Builder resources can be shared seamlessly through RAM.&lt;/p&gt;
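
&lt;p&gt;As a sketch of how this sharing step could be scripted, AWS RAM exposes the same operation through the CLI; the share name, resource ARN, and account ID below are placeholders, not values from this exercise:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
aws ram create-resource-share \
  --name image-builder-share \
  --resource-arns arn:aws:imagebuilder:us-east-1:111111111111:image/my-image/1.0.0/1 \
  --principals 222222222222
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;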

&lt;p&gt;This concludes the analysis and verification of the answer to this AWS Solutions Architect Professional certification question.&lt;/p&gt;




&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Question obtained from the simulator provided by &lt;a href="https://www.whizlabs.com/" rel="noopener noreferrer"&gt;Whizlabs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/what-is-image-builder.html" rel="noopener noreferrer"&gt;What is Image Builder?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/imagebuilder/latest/userguide/manage-shared-resources.html#manage-shared-resources-share" rel="noopener noreferrer"&gt;Share Image Builder resources with AWS RAM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jjoc007.com/introducci%C3%B3n-f94dab9e1058" rel="noopener noreferrer"&gt;Spanish Version&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you've enjoyed this article, feel free to give a 👏 and ⭐ to the repository.&lt;/p&gt;

&lt;p&gt;🤔 &lt;strong&gt;Follow me on social media!&lt;/strong&gt; ⏬&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YouTube&lt;/strong&gt;: &lt;a href="https://www.youtube.com/jjoc007" rel="noopener noreferrer"&gt;https://www.youtube.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter&lt;/strong&gt;: &lt;a href="https://twitter.com/jjoc007" rel="noopener noreferrer"&gt;https://twitter.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/jjoc007" rel="noopener noreferrer"&gt;https://github.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium&lt;/strong&gt;: &lt;a href="https://jjoc007.com" rel="noopener noreferrer"&gt;https://jjoc007.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/jjoc007/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/jjoc007/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>professional</category>
      <category>amazon</category>
    </item>
    <item>
      <title>AWS Step Functions: Using Parallel State</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Thu, 04 Jan 2024 14:46:45 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-step-functions-using-parallel-state-34dm</link>
      <guid>https://dev.to/aws-builders/aws-step-functions-using-parallel-state-34dm</guid>
      <description>&lt;p&gt;Before diving into this post, I recommend you first check out: &lt;a href="https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il"&gt;Introduction to AWS Step Functions Using Terraform as an Infrastructure as Code Tool&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The Parallel State type in AWS Step Functions is particularly relevant for scenarios where the simultaneous execution of various operations is required. This feature not only speeds up processes but also provides a flexible and scalable way to handle complex tasks. In this article, we will explain a basic example of Parallel State, providing a detailed guide on its configuration and use. Our goal is to offer a clear understanding of how to implement this powerful tool in your AWS workflows, fully leveraging its capabilities to optimize your cloud processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Use Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parallel Data Processing&lt;/strong&gt;&lt;br&gt;
In situations where large volumes of data need to be processed, Parallel State allows for dividing the workload into multiple tasks that can be executed simultaneously. This is particularly useful in big data and data analysis applications, where processing and analyzing data from multiple sources at the same time is required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Microservices and Distributed Applications&lt;/strong&gt;&lt;br&gt;
In microservices-based architectures, different components of an application may need to perform tasks in parallel. Parallel State facilitates this coordination, allowing various services to function simultaneously to complete a larger process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation of Complex Workflows&lt;/strong&gt;&lt;br&gt;
In workflows that involve multiple steps or stages, such as in approval or review processes, Parallel State can be used to execute several steps in parallel, speeding up the overall process and improving operational efficiency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simultaneous Testing and Analysis&lt;/strong&gt;&lt;br&gt;
In development and QA environments, Parallel State can be used to execute tests or analyses in parallel, reducing the total time needed for software validation or quality analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;IT Operations and Infrastructure Management&lt;/strong&gt;&lt;br&gt;
For tasks such as infrastructure deployment, system updates, or security patches, Parallel State allows multiple operations to be carried out at the same time, resulting in faster and more efficient management of IT resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Basic Example
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

{
  "Comment": "Example of a Step Function with a Parallel state",
  "StartAt": "InitialState",
  "States": {
    "InitialState": {
      "Type": "Pass",
      "Result": {
        "initialMessage": "Start of the workflow"
      },
      "Next": "ParallelProcess"
    },
    "ParallelProcess": {
      "Type": "Parallel",
      "ResultPath": "$.parallelResults",
      "Next": "FinalState",
      "Branches": [
        {
          "StartAt": "Branch1",
          "States": {
            "Branch1": {
              "Type": "Pass",
              "Result": {
                "branch1Result": "Data from Branch 1"
              },
              "End": true
            }
          }
        },
        {
          "StartAt": "Branch2",
          "States": {
            "Branch2": {
              "Type": "Pass",
              "Result": {
                "branch2Result": "Data from Branch 2"
              },
              "End": true
            }
          }
        }
      ]
    },
    "FinalState": {
      "Type": "Pass",
      "ResultPath": "$.finalResult",
      "InputPath": "$.parallelResults",
      "Result": {
        "finalMessage": "End of the workflow"
      },
      "End": true
    }
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F933bvdw4x6m40psieqtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F933bvdw4x6m40psieqtw.png" alt="Graphical definition"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, we present an AWS Step Function designed to demonstrate the use of a Parallel State. This workflow includes an initial state, a parallel state with two branches, and concludes with a state that converges the results of the parallel branches.&lt;/p&gt;

&lt;h3&gt;
  
  
  Initial State
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt;: Pass. This type of state simply passes its input to its output without modification. Here, it is used to set an initial message, indicating the start of the workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt;: A JSON object with an initial message is assigned, for example, &lt;code&gt;{"initialMessage": "Start of the workflow"}&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Parallel Process
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt;: Parallel. This state allows the execution of multiple branches in parallel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ResultPath&lt;/strong&gt;: &lt;code&gt;$.parallelResults&lt;/code&gt;. This field specifies where the results of the parallel branches should be added in the input state of the next state. In this case, the results of both branches will be stored in an object called &lt;code&gt;parallelResults&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Branches&lt;/strong&gt;: Contains two branches, each with its own Pass type state.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branch 1 and Branch 2&lt;/strong&gt;: &lt;strong&gt;Type&lt;/strong&gt;: Pass. Similar to the initial state, these states pass their input to their output. Here, each branch assigns a different result to a variable, such as &lt;code&gt;{"branch1Result": "Data from Branch 1"}&lt;/code&gt; and &lt;code&gt;{"branch2Result": "Data from Branch 2"}&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final State
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt;: Pass. This state marks the end of the workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ResultPath&lt;/strong&gt;: &lt;code&gt;$.finalResult&lt;/code&gt;. Indicates where the result of this state will be stored, in this case in &lt;code&gt;finalResult&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;InputPath&lt;/strong&gt;: &lt;code&gt;$.parallelResults&lt;/code&gt;. Specifies that this state takes the results of the parallel state as input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result&lt;/strong&gt;: A JSON object with a final message is assigned, for example, &lt;code&gt;{"finalMessage": "End of the workflow"}&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this workflow, the Initial State marks the beginning and sets an initial message. Then, the Parallel Process executes two branches in parallel, each generating its own result. These results are combined and passed to the Final State, which receives them and sets a final message.&lt;/p&gt;
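
&lt;p&gt;A useful detail: a Parallel state's output is an array containing the result of each branch, in the order the branches are declared. With the &lt;code&gt;ResultPath&lt;/code&gt; described above, the input arriving at the final state would look roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "initialMessage": "Start of the workflow",
  "parallelResults": [
    { "branch1Result": "Data from Branch 1" },
    { "branch2Result": "Data from Branch 2" }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;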

&lt;h2&gt;
  
  
  Best Practices and Recommendations
&lt;/h2&gt;

&lt;p&gt;When working with AWS Step Functions, and particularly with Parallel State, there are several best practices and recommendations that can help maximize the efficiency and effectiveness of your workflows:&lt;/p&gt;

&lt;h3&gt;
  
  
  Careful Workflow Design
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Advance Planning&lt;/strong&gt;: Carefully think about the structure of your workflow. Ensure that the use of Parallel States is truly beneficial and does not unnecessarily complicate the process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task Dependencies&lt;/strong&gt;: Avoid dependencies between tasks executed in parallel. The real value of a Parallel State lies in the ability to execute independent tasks simultaneously.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Result Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Result Consolidation&lt;/strong&gt;: Properly use the &lt;code&gt;ResultPath&lt;/code&gt; field to consolidate the results of parallel tasks. Ensure that the results are combined in a way that is useful for subsequent steps.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling&lt;/strong&gt;: Design your workflow to properly handle errors in parallel tasks. Consider how a failure in one branch could affect the others and the workflow as a whole.&lt;/li&gt;
&lt;/ul&gt;
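
&lt;p&gt;Error handling can be attached directly to a Parallel state through its &lt;code&gt;Retry&lt;/code&gt; and &lt;code&gt;Catch&lt;/code&gt; fields. The following is a minimal sketch; the state names and the fallback state are illustrative, and the branches are elided:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
"ParallelProcess": {
  "Type": "Parallel",
  "Branches": [ ... ],
  "Retry": [
    {
      "ErrorEquals": ["States.TaskFailed"],
      "IntervalSeconds": 5,
      "MaxAttempts": 2,
      "BackoffRate": 2.0
    }
  ],
  "Catch": [
    {
      "ErrorEquals": ["States.ALL"],
      "ResultPath": "$.error",
      "Next": "FallbackState"
    }
  ],
  "Next": "FinalState"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that if any branch fails and the error is not caught, the entire Parallel state fails and the results of the other branches are discarded.&lt;/p&gt;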

&lt;h3&gt;
  
  
  Performance Optimization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancing&lt;/strong&gt;: Distribute the workload evenly among parallel tasks to avoid bottlenecks and maximize efficiency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Consider the scalability of your workflow. Ensure that it can handle increases in load without degrading performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security and Access Control
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Permissions and Roles&lt;/strong&gt;: Ensure that the functions and services used in your Step Function have the appropriate permissions, following the principle of least privilege.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditing and Monitoring&lt;/strong&gt;: Use monitoring tools and logs to audit the performance and activity of your workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Rigorous Testing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Testing&lt;/strong&gt;: Perform exhaustive testing of each component of the workflow, as well as the entire workflow, to ensure that everything works as expected, especially in failure scenarios.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Testing&lt;/strong&gt;: Test how your workflow behaves under different loads to identify potential performance or scalability issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Documentation and Maintenance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clear Documentation&lt;/strong&gt;: Maintain detailed documentation of your workflow and its configuration to facilitate maintenance and future updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regular Updates&lt;/strong&gt;: Keep your workflow updated with the latest practices and features offered by AWS Step Functions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this &lt;a href="https://github.com/jjoc007/poc_step_functions_examples/tree/main/5_parallel_state" rel="noopener noreferrer"&gt;repository&lt;/a&gt;, you will find the example ready to deploy in Terraform. Feel free to download it and give it a try.&lt;/p&gt;

&lt;h2&gt;
  
  
  References:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example Repository&lt;/strong&gt;: &lt;a href="https://github.com/jjoc007/poc_step_functions_examples/tree/main/5_parallel_state" rel="noopener noreferrer"&gt;https://github.com/jjoc007/poc_step_functions_examples/tree/main/5_parallel_state&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Official Documentation&lt;/strong&gt;: &lt;a href="https://states-language.net/spec.html#parallel-state" rel="noopener noreferrer"&gt;https://states-language.net/spec.html#parallel-state&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Related Content:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il"&gt;Introduction to AWS Step Functions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/aws-step-functions-using-the-wait-state-type-1ab1"&gt;AWS Step Functions: Using the Wait State Type&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/aws-step-functions-sending-an-email-from-a-state-5b76"&gt;AWS Step Functions: Sending an Email from a State&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/aws-step-functions-example-http-request-call-2h2b"&gt;AWS Step Functions: Example of an HTTP Request&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you've enjoyed this article, feel free to give a 👏 and ⭐ to the repository.&lt;/p&gt;

&lt;p&gt;🤔 &lt;strong&gt;Follow me on social media!&lt;/strong&gt; ⏬&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YouTube&lt;/strong&gt;: &lt;a href="https://www.youtube.com/jjoc007" rel="noopener noreferrer"&gt;https://www.youtube.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter&lt;/strong&gt;: &lt;a href="https://twitter.com/jjoc007" rel="noopener noreferrer"&gt;https://twitter.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/jjoc007" rel="noopener noreferrer"&gt;https://github.com/jjoc007&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium&lt;/strong&gt;: &lt;a href="https://jjoc007.com" rel="noopener noreferrer"&gt;https://jjoc007.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/jjoc007/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/jjoc007/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>programming</category>
      <category>amazon</category>
    </item>
    <item>
      <title>Mastering Docker: Defining Health Checks in Docker Compose</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Fri, 29 Dec 2023 02:49:08 +0000</pubDate>
      <link>https://dev.to/jjoc007/mastering-docker-defining-health-checks-in-docker-compose-4l5k</link>
      <guid>https://dev.to/jjoc007/mastering-docker-defining-health-checks-in-docker-compose-4l5k</guid>
      <description>&lt;p&gt;In this article, we will explore an important aspect of Docker Compose, specifically focusing on health checks, an essential tool for ensuring the reliability and robustness of containerized services. We will cover everything from the basics of what they are and why they are important, to how to practically implement them in your Docker Compose definitions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context on Docker and Docker Compose
&lt;/h2&gt;

&lt;p&gt;In the dynamic world of information technology, virtualization and containerization have revolutionized the way we deploy and manage applications. Docker emerges as a leading solution, enabling developers and system administrators to package, distribute, and manage applications efficiently. Docker encapsulates applications in containers, lightweight and portable environments that ensure consistency across different development, testing, and production environments.&lt;/p&gt;

&lt;p&gt;Docker Compose, a tool within the Docker ecosystem, simplifies the process of defining and sharing multi-container applications. With Docker Compose, you can define all the services that make up your application in a single docker-compose.yml file. This tool facilitates the orchestration of multiple containers, allowing them to work together harmoniously to form complex applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Concepts
&lt;/h3&gt;

&lt;h4&gt;
  
  
  What are Health Checks?
&lt;/h4&gt;

&lt;p&gt;In the context of containerization and Docker, health checks are periodic tests or verifications used to determine whether a container is functioning correctly. Essentially, a health check is a way to automatically monitor the health status of a service or application within a container. If a service fails or stops responding, the health check can detect it and take appropriate actions, such as restarting the container or alerting the operations team.&lt;/p&gt;

&lt;p&gt;Health checks are essential in production and development environments because they ensure that applications continue to function optimally and reliably. In an environment with multiple services and containers, health checks provide an additional layer of security by ensuring that each individual service is functioning as expected.&lt;/p&gt;

&lt;h4&gt;
  
  
  How They Work in Docker
&lt;/h4&gt;

&lt;p&gt;Docker incorporates a health check system that allows users to define commands or instructions to check the status of a container. These commands can be as simple as an HTTP request to an application endpoint or a script that checks the availability of an internal service.&lt;/p&gt;

&lt;p&gt;When a health check fails, Docker marks the container as unhealthy. This information can be used by orchestration and monitoring tools to make automated decisions, such as restarting the container or redistributing load among healthy containers.&lt;/p&gt;

&lt;p&gt;Docker's health checks offer flexibility in terms of configuration, allowing users to define intervals between checks, the number of attempts before considering a service unhealthy, and the maximum time a check should take before being considered failed.&lt;/p&gt;
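
&lt;p&gt;For instance, a check can be baked into an image with the Dockerfile &lt;code&gt;HEALTHCHECK&lt;/code&gt; instruction; the endpoint here is illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD curl -f http://localhost/health || exit 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;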

&lt;p&gt;This functionality is crucial for maintaining high availability and reliability of applications, especially in microservices environments where multiple interdependent services must operate in sync.&lt;/p&gt;

&lt;p&gt;In the next section, we will delve into how health checks are integrated and configured in Docker Compose, providing practical examples for their implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Health Checks in Docker Compose
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Integration with Docker Compose
&lt;/h3&gt;

&lt;p&gt;Docker Compose, a tool for defining and running multi-container Docker applications, natively integrates the concept of health checks into its configurations. Through docker-compose.yml files, Docker Compose enables developers and system administrators to specify how the health check for each service in their environment should be performed.&lt;/p&gt;

&lt;p&gt;The integration of health checks in Docker Compose is crucial for automating the monitoring and management of service health. In complex environments with multiple containers, health checks become a vital tool to ensure that each service is operating correctly and to facilitate automatic recovery in case of failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuration Parameters
&lt;/h3&gt;

&lt;p&gt;In a docker-compose.yml file, the configuration of a health check is done within the definition of each service. The key parameters include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;test&lt;/code&gt;: The command that will be executed to check the status of the service. It can be a string or a list. For example, &lt;code&gt;["CMD", "curl", "-f", "http://localhost/health"]&lt;/code&gt; for a web application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;interval&lt;/code&gt;: The time between each check. For example, &lt;code&gt;30s&lt;/code&gt; indicates that the health check is performed every 30 seconds.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;timeout&lt;/code&gt;: The maximum time a health check can take before being considered failed. For example, &lt;code&gt;10s&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;retries&lt;/code&gt;: The number of consecutive failed health checks allowed before the service is marked as unhealthy.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;start_period&lt;/code&gt;: An initial period during which the results of failed health checks are not counted towards the maximum number of retries. This is useful for applications that require time to start.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These parameters allow for detailed configuration tailored to the specific needs of each service, providing precise control over how and when a service should be considered unhealthy.&lt;/p&gt;
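
&lt;p&gt;Putting these parameters together, a service definition might look like the following sketch; the service name, image, and endpoint are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
services:
  web:
    image: my-web-app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;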

&lt;h2&gt;
  
  
  Example:
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  service1:
    build: ./web-server
    environment:
      - PORT=3000
    ports:
      - "3000:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/ping"]
      interval: 2s
      timeout: 60s
      retries: 20


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This Docker Compose example illustrates the configuration of a health check for a service named &lt;code&gt;service1&lt;/code&gt;, which represents a web server. The health check definition is made in the docker-compose.yml file and is composed of several key components:&lt;/p&gt;

&lt;h3&gt;
  
  
  Health Check Test (&lt;code&gt;test&lt;/code&gt;):
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;["CMD", "curl", "-f", "http://localhost:3004/ping"]&lt;/code&gt;: This line defines the command that will be executed for the health check. It uses curl to make an HTTP GET request to the /ping path on port 3004 of localhost. The -f option causes curl to fail if the HTTP status code is 400 or higher, which is useful for detecting errors. This command checks if the web service is responding correctly.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Interval (&lt;code&gt;interval&lt;/code&gt;):
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;2s&lt;/code&gt;: This specifies that the health check will be executed every 2 seconds. That is, every 2 seconds, Docker will run the curl command inside the container to check the status of the service.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Timeout (&lt;code&gt;timeout&lt;/code&gt;):
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;60s&lt;/code&gt;: This is the maximum time Docker will wait for the health check command to complete. If the curl command does not respond within 60 seconds, it will be considered a health check failure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Retries (&lt;code&gt;retries&lt;/code&gt;):
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;20&lt;/code&gt;: Indicates the number of times Docker will retry the health check before marking the service as unhealthy. In this case, if the health check fails 20 consecutive times, Docker will consider the &lt;code&gt;service1&lt;/code&gt; service to be unhealthy.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example of a Service Dependent on Another:
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

  service2:
    build: ./web-server
    environment:
      - PORT=3001
    ports:
      - "3001:3001"
    depends_on:
      service1:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3001/ping"]
      interval: 2s
      timeout: 60s
      retries: 20


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this Docker Compose example, we have two services: &lt;code&gt;service1&lt;/code&gt; and &lt;code&gt;service2&lt;/code&gt;. Both are configured with health checks, but the interesting aspect here is the dependency of &lt;code&gt;service2&lt;/code&gt; on the health status of &lt;code&gt;service1&lt;/code&gt;, indicated by the &lt;code&gt;depends_on&lt;/code&gt; clause.&lt;/p&gt;

&lt;h3&gt;
  
  
  Health Check-Based Dependency (&lt;code&gt;depends_on&lt;/code&gt;):
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;service2&lt;/code&gt; uses the &lt;code&gt;depends_on&lt;/code&gt; option to establish a dependency on &lt;code&gt;service1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;condition: service_healthy&lt;/code&gt;: This line specifies that &lt;code&gt;service2&lt;/code&gt; should only start after &lt;code&gt;service1&lt;/code&gt; has been considered "healthy" by Docker.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Operation of &lt;code&gt;service1&lt;/code&gt;'s Health Check in Relation to &lt;code&gt;service2&lt;/code&gt;:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Starting of &lt;code&gt;service1&lt;/code&gt;&lt;/strong&gt;: When the set of services is initiated with Docker Compose, &lt;code&gt;service1&lt;/code&gt; begins its initialization process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution of &lt;code&gt;service1&lt;/code&gt; Health Check&lt;/strong&gt;: While &lt;code&gt;service1&lt;/code&gt; is running, Docker performs health checks at intervals of 2 seconds (as per its configuration). If the service successfully responds to the health check command (in this case, a curl to &lt;code&gt;http://localhost:3000/ping&lt;/code&gt;), it continues operating normally. If it fails, Docker will retry the health check up to 20 times, with a 60-second timeout per attempt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health Status of &lt;code&gt;service1&lt;/code&gt; and Its Effect on &lt;code&gt;service2&lt;/code&gt;&lt;/strong&gt;: If &lt;code&gt;service1&lt;/code&gt; passes the health check and is marked as healthy, then Docker proceeds to start &lt;code&gt;service2&lt;/code&gt;. If &lt;code&gt;service1&lt;/code&gt; fails its health checks and is marked as unhealthy, &lt;code&gt;service2&lt;/code&gt; will not start until &lt;code&gt;service1&lt;/code&gt; is considered healthy.&lt;/li&gt;
&lt;/ul&gt;
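
&lt;p&gt;To observe this behavior from the command line, the health status that Docker records for a running container can be queried with &lt;code&gt;docker inspect&lt;/code&gt;; the container name is illustrative, and the result is one of &lt;code&gt;starting&lt;/code&gt;, &lt;code&gt;healthy&lt;/code&gt;, or &lt;code&gt;unhealthy&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker inspect --format '{{.State.Health.Status}}' service1-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;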

&lt;h2&gt;
  
  
  Example of Starting Up When &lt;code&gt;service1&lt;/code&gt; is Healthy:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0doac5css8j6l80bkx18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0doac5css8j6l80bkx18.png" alt="service 1 healthy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Example of Starting Up When &lt;code&gt;service1&lt;/code&gt; is Unhealthy:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4nwj49vnqwn4alsbu0x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4nwj49vnqwn4alsbu0x.png" alt="Service 2 Unhealthy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This health-based dependency model is crucial in environments where services need to interact with each other and where the availability of one service depends on the state of another. It ensures that &lt;code&gt;service2&lt;/code&gt; will not attempt to operate until &lt;code&gt;service1&lt;/code&gt;, on which it depends, is fully functional and ready to handle requests or interact with other services. This improves the stability and reliability of the overall application deployment.&lt;/p&gt;

&lt;p&gt;In this &lt;a href="https://github.com/jjoc007/poc_docker_compose_multiservice" rel="noopener noreferrer"&gt;repository&lt;/a&gt;, you will find the example ready to be deployed with Terraform. Feel free to download and try it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  References:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Example Repository&lt;/strong&gt;: &lt;a href="https://github.com/jjoc007/poc_docker_compose_multiservice" rel="noopener noreferrer"&gt;https://github.com/jjoc007/poc_docker_compose_multiservice&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Official Documentation&lt;/strong&gt;: &lt;a href="https://docs.docker.com/compose/compose-file/compose-file-v3/" rel="noopener noreferrer"&gt;Docker Compose File Reference&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you liked this article, don't hesitate to give it a 👏 and ⭐ the repository.&lt;/p&gt;

&lt;p&gt;🤔 &lt;strong&gt;Follow me on social media!&lt;/strong&gt; ⏬&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/jjoc007" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://twitter.com/jjoc007" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/jjoc007" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://jjoc007.com" rel="noopener noreferrer"&gt;Website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/jjoc007/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>sre</category>
      <category>devops</category>
      <category>programming</category>
      <category>docker</category>
    </item>
    <item>
      <title>Mastering Terraform: How to Manage Multiple Environments with Dynamic S3 Backends</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Wed, 20 Dec 2023 03:42:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/mastering-terraform-how-to-manage-multiple-environments-with-dynamic-s3-backends-1p9</link>
      <guid>https://dev.to/aws-builders/mastering-terraform-how-to-manage-multiple-environments-with-dynamic-s3-backends-1p9</guid>
      <description>&lt;h2&gt;
  
  
  What is Terraform?
&lt;/h2&gt;

&lt;p&gt;Terraform is an infrastructure as code (IaC) tool developed by HashiCorp. It is used to define and provision IT infrastructure using a declarative configuration language or, in more recent versions, through JSON as well. Terraform allows users to define resources both in the cloud (such as servers, storage, and networks) and on-premises (such as virtual machines and services).&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of Terraform
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Idempotence&lt;/strong&gt;: Terraform ensures that multiple executions of the same configuration files produce the same end-state, avoiding inconsistencies and potential errors in the infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State Management&lt;/strong&gt;: Terraform keeps a record of the current state of the infrastructure, which facilitates incremental changes and automation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Cloud and On-Premise Support&lt;/strong&gt;: Compatible with numerous cloud service providers, like AWS, Azure, and Google Cloud, as well as on-premise solutions, allowing users to efficiently manage a complex hybrid environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modules and Templates&lt;/strong&gt;: Enables the reuse of configurations through modules, improving efficiency and consistency in infrastructure management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Importance of Terraform in Infrastructure Management
&lt;/h3&gt;

&lt;p&gt;Terraform has rapidly gained popularity in the world of software development and systems management due to its ability to handle large infrastructures efficiently and predictably. Its declarative nature and ability to integrate with a wide range of cloud service providers make it an indispensable tool for companies looking to adopt DevOps practices and infrastructure automation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backends in Terraform
&lt;/h2&gt;

&lt;p&gt;Backends in Terraform play a crucial role in managing the state of the infrastructure. In Terraform, the state is a record of the managed infrastructure that maintains information about the configured resources and their properties. Backends determine both the location of this state and the locking method to prevent state conflicts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Types of Backends in Terraform
&lt;/h3&gt;

&lt;p&gt;Terraform offers various types of backends, mainly classified into two categories: local and remote.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Local Backends&lt;/strong&gt;: Store the state in a file on the local system. They are simple and easy to use but are not suitable for team collaboration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote Backends&lt;/strong&gt;: Save the state to a remote service. These are ideal for teams as they allow state sharing and locking of files to prevent conflicts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The AWS S3 Backend
&lt;/h3&gt;

&lt;p&gt;Among remote backends, the AWS S3 backend is one of the most popular. This backend uses Amazon S3 services to store the state file and can optionally be integrated with DynamoDB for state locking and consistency.&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantages of the S3 Backend:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Durability and Scalability&lt;/strong&gt;: S3 offers high durability and scalability, ensuring the security and accessibility of Terraform's state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Access Control&lt;/strong&gt;: Integration with AWS IAM allows for detailed control of state access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: When used with DynamoDB, it ensures state consistency and prevents conflicts in team environments.&lt;/li&gt;
&lt;/ul&gt;
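<p>For example, enabling locking with DynamoDB only requires adding the <code>dynamodb_table</code> argument to the backend block (the bucket and table names below are placeholders):</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>terraform {
  backend "s3" {
    bucket         = "my-state-bucket"
    key            = "path/to/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"  # the table needs a "LockID" string partition key
  }
}
</code></pre>

</div>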

&lt;h3&gt;
  
  
  Importance of Backends in State Management
&lt;/h3&gt;

&lt;p&gt;The choice of backend directly affects how Terraform's state is managed, especially in team environments and on a large scale. An appropriate backend ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Protection against data loss and unauthorized access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effective Collaboration&lt;/strong&gt;: Allows multiple users to work on the same infrastructure without overwriting changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation&lt;/strong&gt;: Facilitates integration with CI/CD systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example of S3 Backend Configuration in Terraform
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket  = "s3-bucket-name"
    key     = "path/to/state/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Components of the Configuration:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;bucket&lt;/strong&gt;: The name of the Amazon S3 bucket where the Terraform state will be stored.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;key&lt;/strong&gt;: The location within the bucket where the Terraform state file (&lt;code&gt;.tfstate&lt;/code&gt;) will be saved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;region&lt;/strong&gt;: The AWS region where the S3 bucket is located.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;encrypt&lt;/strong&gt;: Enables encryption on the AWS server for the state file stored in S3.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenge in Managing Multiple Environments in Terraform
&lt;/h2&gt;

&lt;p&gt;One of the most common challenges when working with Terraform, especially in large organizations or complex projects, is the efficient management of multiple infrastructure environments, such as development, testing, and production. Each of these environments may have different configurations and needs, and it is essential to keep them isolated to prevent interference and errors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Problems with a Single Backend
&lt;/h3&gt;

&lt;p&gt;When a single backend is used for all environments, several problems can arise:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;State Conflict&lt;/strong&gt;: If all environments share the same state file, there is a significant risk of conflicts and accidental overwrites.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security and Access Control&lt;/strong&gt;: A single backend may not provide the necessary level of differentiated access control for different environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Difficulties in Automation&lt;/strong&gt;: The automation of deployments becomes more complex and error-prone when multiple environments interact with a single state.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Need for Isolation and Flexibility
&lt;/h3&gt;

&lt;p&gt;Isolation is crucial to ensure that configurations from one environment do not affect others. Moreover, each environment may require different backend configurations in terms of storage regions, access policies, and other specific security and performance settings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability and Maintenance Challenge
&lt;/h3&gt;

&lt;p&gt;As a project grows, maintaining a single backend becomes unsustainable. Scalability and maintenance become significant issues, and efficient state management becomes more challenging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution with Multiple S3 Backends
&lt;/h2&gt;

&lt;p&gt;The solution to the challenges presented in managing multiple environments in Terraform lies in the implementation of multiple S3 backends. This strategy involves setting up a unique S3 backend for each environment (development, testing, production, etc.), using Terraform's &lt;code&gt;-backend-config&lt;/code&gt; parameter.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using the &lt;code&gt;-backend-config&lt;/code&gt; Parameter
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;-backend-config&lt;/code&gt; parameter allows Terraform users to specify a backend configuration file for each initialization. This enables a clear separation of the states for each environment, ensuring that operations in one environment do not affect others.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example of Dynamic Configuration
&lt;/h3&gt;

&lt;p&gt;In the main Terraform file (&lt;code&gt;main.tf&lt;/code&gt;), a generic backend is defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt;4.0"
    }
  }

  backend "s3" {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, specific configuration files for each environment are created in an &lt;code&gt;env&lt;/code&gt; folder:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryyc11lw66mni8vrwnen.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryyc11lw66mni8vrwnen.png" alt="Project folders" width="408" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Content of an Environment Configuration File (&lt;code&gt;backend_s3_dev.hcl&lt;/code&gt;):
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;key    = "dev-state/terraform.tfstate"
bucket = "dev-bucket"
region = "us-east-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Content of an Environment Configuration File (&lt;code&gt;backend_s3_prod.hcl&lt;/code&gt;):
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;key    = "prod-state/terraform.tfstate"
bucket = "prod-bucket"
region = "us-east-1"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Initialization and Application
&lt;/h3&gt;

&lt;p&gt;To initialize Terraform for a specific environment, the following command is used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init -backend-config="env/backend_s3_dev.hcl"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command sets up the backend for the development environment, using the &lt;code&gt;backend_s3_dev.hcl&lt;/code&gt; file.&lt;/p&gt;
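<p>To point the same working directory at another environment later, re-run <code>init</code> with the <code>-reconfigure</code> flag so Terraform discards the previously saved backend settings instead of trying to migrate the state:</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>terraform init -reconfigure -backend-config="env/backend_s3_prod.hcl"
</code></pre>

</div>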

&lt;h2&gt;
  
  
  Benefits of This Approach
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;State Isolation&lt;/strong&gt;: Each environment has its own state, stored in a separate S3 bucket, preventing conflicts and overwrites.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Security&lt;/strong&gt;: Specific access policies can be applied for each environment, improving security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Flexibility&lt;/strong&gt;: Allows for individual adjustments in storage settings and regions for each environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of Automation&lt;/strong&gt;: This setup facilitates integration with CI/CD systems, enabling safe and efficient automated deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this &lt;a href="https://github.com/jjoc007/poc_terraform_with_multiples_s3_backend" rel="noopener noreferrer"&gt;repository&lt;/a&gt;, you will see the example ready to be deployed in Terraform. Feel free to download it and try it out.&lt;/p&gt;

&lt;h2&gt;
  
  
  References:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Example Repository: &lt;a href="https://dev.topoc_terraform_with_multiples_s3_backend"&gt;https://github.com/jjoc007/poc_terraform_with_multiples_s3_backend&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Official Documentation: &lt;a href="https://dev.toTerraform%20backend%20configuration"&gt;https://developer.hashicorp.com/terraform/language/settings/backends/configuration&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you liked this article, feel free to give it a 👏 and ⭐ the repository.&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
      <category>devops</category>
      <category>programming</category>
    </item>
    <item>
      <title>AWS Step Functions: Example HTTP Request Call</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Mon, 18 Dec 2023 19:19:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-step-functions-example-http-request-call-2h2b</link>
      <guid>https://dev.to/aws-builders/aws-step-functions-example-http-request-call-2h2b</guid>
      <description>&lt;p&gt;In the dynamic world of cloud computing, integration and automation are keys to success. AWS Step Functions stands as a powerful orchestrator of microservices, enabling developers to weave complex sequences of tasks with ease and efficiency. Among its crown jewels, HTTP Tasks shine on their own, offering unmatched versatility and integration capabilities.&lt;/p&gt;

&lt;p&gt;What are HTTP Tasks? Essentially, they are states within an AWS Step Functions state machine designed to interact with third-party APIs. Think of them as bridges connecting your workflow in AWS with the vast universe of available web services. With HTTP Tasks, you can call external APIs like Salesforce, Stripe, or even more customized services, seamlessly integrating into your workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Features:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connection Flexibility&lt;/strong&gt;: Whether you need to fetch data, send information, or initiate processes in external systems, HTTP Tasks allow you to do so with just a few clicks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Robust Security&lt;/strong&gt;: The integration of AWS IAM and EventBridge ensures that your API calls are secure and managed with AWS's best practices in terms of authentication and authorization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Advanced Customization&lt;/strong&gt;: From specifying HTTP methods (GET, POST, and more) to adjusting headers and parameters, you have total control over how you interact with external APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring and Scalability&lt;/strong&gt;: Benefit from integrated monitoring and the ability to scale automatically to handle workloads of any size.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article will guide you through a fascinating journey into the world of HTTP Tasks in AWS Step Functions. We explore realistic use cases, crucial aspects of implementation, and conclude with a practical example: notifying a Slack webhook about the status of a process. Get ready to unlock new possibilities and take your cloud workflows to the next level.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Cases:
&lt;/h3&gt;

&lt;p&gt;HTTP Tasks in AWS Step Functions open up a range of possibilities for service integration and process automation. This section explores various practical and powerful use cases where HTTP Tasks can be a transformative tool.&lt;/p&gt;

&lt;h4&gt;
  
  
  Integration with CRM and ERP Systems
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customer and Order Management&lt;/strong&gt;: Automate data synchronization between your AWS system and CRM platforms like Salesforce. For instance, you can automatically update customer records in Salesforce when changes occur in your AWS system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Process Automation&lt;/strong&gt;: Connect your workflows with ERP systems to manage inventory, orders, and logistics, enabling real-time integration that optimizes business operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Interactions with Payment Platforms
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Payment Processing&lt;/strong&gt;: Integrate services like Stripe or PayPal to automate payment processing. HTTP Tasks can send transaction data and receive payment confirmations, streamlining the sales cycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subscription Management&lt;/strong&gt;: Automate the creation and management of subscriptions, as well as updating customer details in online payment systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Notifications and Alerts
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sending Notifications&lt;/strong&gt;: Use services like Twilio or SendGrid to send SMS or emails as part of a workflow, for example, for transaction alerts or order status notifications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Slack&lt;/strong&gt;: Send automatic messages to Slack channels or users to notify about project updates, system alerts, or data summaries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Data Analysis and Reporting
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Analysis Tools&lt;/strong&gt;: Extract data from AWS for analysis in external platforms like Google Analytics or BI tools, allowing for deeper understanding and data-driven action.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Reporting&lt;/strong&gt;: Generate and distribute automatic reports using data processed by your AWS workflow, sending them to reporting tools or storing them in file systems like Google Drive.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Artificial Intelligence and Machine Learning Services
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interaction with AI/ML Models&lt;/strong&gt;: Integrate external AI services like IBM Watson or Google AI to enrich your data with machine learning capabilities, sentiment analysis, or image recognition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Machine Learning Process Automation&lt;/strong&gt;: Orchestrate and manage ML workflows, from data preparation to training and evaluating models, using various ML APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cloud Resource Management
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration of Cloud Services&lt;/strong&gt;: Automate the creation, update, and deletion of cloud resources, interacting with services like AWS EC2 or Google Cloud Platform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Event Response&lt;/strong&gt;: Automatically respond to cloud system events, such as launching EC2 instances or performance alerts, using workflows that include calls to relevant APIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Integration of Social Media and Marketing
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Social Media Posts&lt;/strong&gt;: Schedule and publish content on social media platforms like Twitter or Facebook, as part of digital marketing strategies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Basic Example:
&lt;/h3&gt;

&lt;p&gt;To illustrate how an HTTP Task is implemented in AWS Step Functions, let's consider a concrete example. Imagine that we need to make a GET call to an external API to fetch information. Below is the code for the HTTP Task and an explanation of each property involved.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "Example of an HTTP Task in AWS Step Functions",
  "StartAt": "CallExternalAPI",
  "States": {
    "CallExternalAPI": {
      "Type": "Task",
      "Resource": "arn:aws:states:::http:invoke",
      "Parameters": {
        "ApiEndpoint": "https://api.externalservice.com/data",
        "Method": "GET",
        "Headers": {
          "Content-Type": "application/json"
        },
        "Authentication": {
          "ConnectionArn": "arn:aws:events:region:account-id:connection/connection-name/connection-id"
        }
      },
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Properties
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Comment&lt;/strong&gt;: A description or comment about the purpose of the state machine. Useful for documentation and clarity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;StartAt&lt;/strong&gt;: Indicates the first state that will be executed in the state machine. In this case, it's &lt;code&gt;CallExternalAPI&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;States&lt;/strong&gt;: Defines the different states of the machine. Here we only have one state, &lt;code&gt;CallExternalAPI&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CallExternalAPI&lt;/strong&gt;: The name of the state we are defining.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type&lt;/strong&gt;: Defines the type of state. In this case, it's a &lt;code&gt;Task&lt;/code&gt;, which means it performs a specific task.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource&lt;/strong&gt;: Specifies the type of resource that the task will use. Here &lt;code&gt;arn:aws:states:::http:invoke&lt;/code&gt; is used to indicate that it is an HTTP Task.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parameters&lt;/strong&gt;: Defines the necessary parameters for the HTTP task.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ApiEndpoint&lt;/strong&gt;: The URL of the external API to which the call will be made.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Method&lt;/strong&gt;: The HTTP method to be used. Here it is &lt;code&gt;GET&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headers&lt;/strong&gt;: Necessary HTTP headers for the request. In this case, it indicates that the content is of JSON type.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Authentication&lt;/strong&gt;: This field is used to specify authentication details for the API call.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ConnectionArn&lt;/strong&gt;: The Amazon Resource Name (ARN) of the EventBridge connection that manages authentication credentials. This connection must be pre-configured in EventBridge and contain the necessary information to authenticate with the API (e.g., an API token).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;End&lt;/strong&gt;: Indicates if this is the last state of the state machine. If &lt;code&gt;true&lt;/code&gt;, the state machine will end after executing this state.&lt;/li&gt;

&lt;/ul&gt;
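<p>In practice you will usually want to retry transient failures of the external API. A minimal sketch, assuming the same state as above, adds a <code>Retry</code> block to the task (the interval, attempt, and backoff values are illustrative):</p>

<div class="highlight js-code-highlight">
<pre class="highlight plaintext"><code>"CallExternalAPI": {
  "Type": "Task",
  "Resource": "arn:aws:states:::http:invoke",
  "Parameters": { ... },
  "Retry": [
    {
      "ErrorEquals": ["States.TaskFailed"],
      "IntervalSeconds": 2,
      "MaxAttempts": 3,
      "BackoffRate": 2.0
    }
  ],
  "End": true
}
</code></pre>

</div>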

&lt;h3&gt;
  
  
  Practical Example: Simple Notification to Slack via Webhook
&lt;/h3&gt;

&lt;p&gt;In today's digital era, instant communication and notification are essential for efficiency and team collaboration. Slack, one of the leading platforms in business communication, offers robust functionality to integrate personalized notifications through webhooks. Combining this with AWS Step Functions allows us to automate notifications and keep teams informed about key events and processes in real time.&lt;/p&gt;

&lt;p&gt;In this section, we will tackle a practical and highly relevant example: sending a simple notification to a Slack channel using a webhook. This task, seemingly straightforward, encapsulates fundamental concepts of integration and automation in the cloud, demonstrating how modern tools can work together efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca7nno2olfnad61w26ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fca7nno2olfnad61w26ja.png" alt="Example architecture " width="528" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps for Creating a Webhook in Slack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a Channel for Receiving Notifications&lt;/strong&gt;: Start by creating a channel where you will receive the notifications. In my case, I named it &lt;code&gt;slack-notification-test&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjbhusv8c5fp4mhyk7ox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftjbhusv8c5fp4mhyk7ox.png" alt="slack channel" width="323" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a Workflow in Integrations&lt;/strong&gt;: Next, go to the integrations section and create a workflow. In this example, we'll name it &lt;code&gt;bot-auto-not&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbxrlnt268f3aj0zeqh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbxrlnt268f3aj0zeqh1.png" alt="slack channel integration" width="579" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Workflow Creation with Webhook Trigger&lt;/strong&gt;: In the workflow creation process, choose to start it via a webhook. Following this, you will be asked to define the variables that will be received when consuming the webhook. In this example, the variables are &lt;code&gt;type&lt;/code&gt; and &lt;code&gt;text&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw8wjnigbc9kq8arnihk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw8wjnigbc9kq8arnihk.png" alt="Detail workflow" width="579" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Adding an Additional Step for Message Sending&lt;/strong&gt;: After setting up the webhook, create an additional step to send a message to the channel using the information from the defined variables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegmhg4i9spyr62e0xs6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fegmhg4i9spyr62e0xs6g.png" alt="send message step of workflow" width="643" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Finding the Webhook URL&lt;/strong&gt;: In the webhook step, you will find the URL that will be used to send notifications. It will be in this format: &lt;code&gt;https://hooks.slack.com/triggers/11111111/222222222/333333333&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testing the Webhook&lt;/strong&gt;: You can test the webhook by making a POST request and sending the parameters defined in the workflow in the body of the request.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location 'https://hooks.slack.com/triggers/1111/22222/33333 \
--header 'Content-type: application/json' \
--data '{
  "text": "Example text",
  "type": "ERROR"
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8olbebrq0jfbpkp5ew1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8olbebrq0jfbpkp5ew1j.png" alt="Message example" width="472" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Starting with the Step Function Implementation&lt;/strong&gt;: With all these components defined, we can now begin with the implementation of the Step Function.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Steps for Creating the Step Function via Terraform:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Create a Role and Policies for the Step Function&lt;/strong&gt;: The first step is to create a role and policies that will be assumed by the Step Function, enabling it to make HTTP calls.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Role Resource&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "step_functions_role" {
  name = "step_functions_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "states.amazonaws.com"
        }
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Policies Resource for role&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_policy" "sfn_logging_policy" {
  name        = "SFNLoggingPolicy"
  description = "Allow Step Functions to log to CloudWatch Logs."

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect   = "Allow",
        Action   = [
          "logs:*"
        ],
        Resource = "*"
      }
    ]
  })
}

resource "aws_iam_policy" "sf_invoke_requests_policy" {
  name        = "SFSendInvokeRequestsPolicy"
  description = "Allow Step Functions to invoke requests."

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect   = "Allow",
        Action   = [
          "states:InvokeHTTPEndpoint"
        ],
        Resource = "*"
      },
      {
        Effect   = "Allow",
        Action   = [
          "events:RetrieveConnectionCredentials"
        ],
        Resource = "*"
      },
      {
        Effect   = "Allow",
        Action   = [
          "secretsmanager:GetSecretValue",
          "secretsmanager:DescribeSecret"
        ],
        Resource = "*"
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are two key policies to set up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log Management Policy&lt;/strong&gt;: This policy allows the Step Function to write its execution logs to CloudWatch Logs.&lt;br&gt;
&lt;strong&gt;HTTP Invocation and Secrets Access Policy&lt;/strong&gt;: The second policy grants the Step Function the ability to invoke HTTP endpoints, retrieve connection credentials from EventBridge, and read secrets from Secrets Manager, which is essential when the requests require authentication.&lt;/p&gt;
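&lt;p&gt;Note that defining the policies does not by itself attach them to the role. A minimal sketch of the attachment step, assuming the resource names used in the snippets above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach both policies to the Step Functions role.
resource "aws_iam_role_policy_attachment" "sfn_logging" {
  role       = aws_iam_role.step_functions_role.name
  policy_arn = aws_iam_policy.sfn_logging_policy.arn
}

resource "aws_iam_role_policy_attachment" "sfn_invoke_requests" {
  role       = aws_iam_role.step_functions_role.name
  policy_arn = aws_iam_policy.sf_invoke_requests_policy.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;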

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Creation of the EventBridge Connection&lt;/strong&gt;: AWS EventBridge is a serverless event bus service that facilitates the connection of applications with data from different sources. A key feature is the ability to establish "connections" that allow AWS Step Functions to securely communicate with third-party APIs, such as Slack, Salesforce, Stripe, among others. These connections are used, for example, in HTTP Tasks to manage authentications and authorizations securely and efficiently. How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Creating a Connection&lt;/strong&gt;: In EventBridge, you define a "connection" that encapsulates the authentication details necessary to communicate with an external API. This connection can include information such as access tokens, API keys, or basic authentication credentials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Secure Credential Storage&lt;/strong&gt;: EventBridge securely stores credentials using AWS Secrets Manager. This means that the credentials are encrypted and protected, and are not exposed in the task definitions or in the code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Step Functions&lt;/strong&gt;: When defining an HTTP Task in AWS Step Functions, you specify the ARN (Amazon Resource Name) of the EventBridge connection. Step Functions uses this connection to authenticate with the external API during the task execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authorization Management&lt;/strong&gt;: The connection handles the authorization process with the external API. This frees developers from the burden of implementing and maintaining custom code for managing tokens or API keys.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EventBridge Connection Resource&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_event_connection" "slack_test_webhook" {
  name               = "test-slack-notification-connection"
  description        = "test-slack-notification-connection"
  authorization_type = "API_KEY"

  auth_parameters {
    api_key {
      key   = "not-key"
      value = "None"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Necessity of the Connection Even Without Authentication for the Webhook&lt;/strong&gt;: This connection is required even though the webhook does not handle authentication. For this reason, we supply a placeholder API key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Step Function Definition&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "Send a slack notification via sf",
  "StartAt": "FirstStep",
  "States": {
    "FirstStep": {
      "Type": "Pass",
      "Result": {
        "type": "INFO",
        "text": "Message from Step Functions"
      },
      "ResultPath": "$.notificationData",
      "Next": "SendNotification"
    },
    "SendNotification": {
      "Type": "Task",
      "Resource": "arn:aws:states:::http:invoke",
      "Parameters": {
        "Authentication": {
          "ConnectionArn": "${aws_cloudwatch_event_connection.slack_test_webhook.arn}"
        },
        "RequestBody": {
          "text.$": "$.notificationData.text",
          "type.$": "$.notificationData.type"
        },
        "ApiEndpoint": "https://hooks.slack.com/triggers/11111111/2222222222/33333333333",
        "Method": "POST"
      },
      "Next": "LastStep"
    },
    "LastStep": {
      "Type": "Succeed"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
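&lt;p&gt;For context, this JSON definition is typically embedded in a Terraform &lt;code&gt;aws_sfn_state_machine&lt;/code&gt; resource via a heredoc; a minimal sketch (the resource and state machine names are illustrative, and the role references the one created earlier):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_sfn_state_machine" "slack_notification" {
  name     = "slack-notification-state-machine"
  role_arn = aws_iam_role.step_functions_role.arn

  # The state machine definition shown above goes inside the heredoc.
  definition = &lt;&lt;EOF
{
  "Comment": "Send a slack notification via sf",
  "StartAt": "FirstStep",
  ...
}
EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;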



&lt;p&gt;&lt;strong&gt;Three States in the Step Function&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;FirstStep&lt;/strong&gt;: This state generates the data that will be sent in the notification. The data is passed along through the &lt;code&gt;notificationData&lt;/code&gt; object.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SendNotification&lt;/strong&gt;: This state is responsible for sending the notification. Notice that it uses the &lt;code&gt;http:invoke&lt;/code&gt; resource, authenticating through the EventBridge connection ARN and building the request body from &lt;code&gt;notificationData&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;LastStep&lt;/strong&gt;: A &lt;code&gt;Succeed&lt;/code&gt; state that marks the workflow as completed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0omyhj5zja3s12xwb2j5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0omyhj5zja3s12xwb2j5.png" alt="AWS Step function working" width="720" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After going through the process of setting up a simple notification to Slack via a webhook using AWS Step Functions, it's clear that we are dealing with a powerful combination of tools that can revolutionize the way we interact with workflows and team communication.&lt;/p&gt;

&lt;p&gt;If you want to see the complete example to run it in Terraform, here's the &lt;a href="https://github.com/jjoc007/poc_step_functions_examples/tree/main/3_call_http_service" rel="noopener noreferrer"&gt;repository&lt;/a&gt; for you to check out.&lt;/p&gt;

&lt;p&gt;If you liked this article, don't hesitate to give it a 👏 and a ⭐ on the repository.&lt;/p&gt;

&lt;p&gt;Thank you!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>programming</category>
      <category>stepfunctions</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AWS Step Functions: Sending an Email from a State</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Tue, 12 Dec 2023 04:21:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-step-functions-sending-an-email-from-a-state-5b76</link>
      <guid>https://dev.to/aws-builders/aws-step-functions-sending-an-email-from-a-state-5b76</guid>
      <description>&lt;p&gt;In today's agile and always-connected world, automation and workflow orchestration are crucial for operational efficiency. AWS Step Functions is a service that simplifies the coordination of distributed application components and microservices using visual workflows. One of the most powerful features of Step Functions is its direct integration with various AWS services through AWS SDK. This functionality allows developers to quickly build complex applications with less code and maintenance. In this post, we will explore how this integration works, specifically focusing on how to send emails using Amazon Simple Email Service (SES) V2.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is an AWS SDK Integration in Step Functions and How Does It Work?
&lt;/h2&gt;

&lt;p&gt;The AWS SDK integration in Step Functions allows users to make direct calls to AWS services without writing a custom Lambda function to mediate the interaction. This means you can use AWS APIs directly within your Step Functions state machine definition.&lt;/p&gt;

&lt;p&gt;When you configure a state in your state machine to perform an AWS SDK operation, you simply specify the AWS service and the action you wish to use, along with the necessary parameters for that API call. Step Functions handles the API request and response, including retry attempts and error handling.&lt;/p&gt;
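&lt;p&gt;Concretely, the target service and action are encoded in the task's &lt;code&gt;Resource&lt;/code&gt; field, following this general pattern:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arn:aws:states:::aws-sdk:serviceName:apiAction

# For example, the SES V2 SendEmail action used later in this post:
arn:aws:states:::aws-sdk:sesv2:sendEmail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;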

&lt;p&gt;Official documentation for this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html"&gt;AWS SDK Services Support&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/connect-to-resource.html"&gt;Connect to a Resource Example&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common Use Cases for This Type of Integration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data Processing:&lt;/strong&gt; Execute tasks such as data transformations and analytics using services like AWS Glue or Amazon EMR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure Automation:&lt;/strong&gt; Orchestrate changes in AWS infrastructure, such as updating AWS CloudFormation stacks or launching Amazon EC2 instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Integrations:&lt;/strong&gt; Communicate with other AWS services to send notifications with Amazon SNS, message queues with Amazon SQS, or store files in Amazon S3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Management:&lt;/strong&gt; Automate the creation and management of users in AWS Identity and Access Management (IAM).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example: Sending an Email with SES V2
&lt;/h2&gt;

&lt;p&gt;Imagine that you need to send confirmation emails to users after they complete an action in your application. Instead of invoking a Lambda function that in turn calls SES, you can use Step Functions to call SES directly.&lt;/p&gt;

&lt;p&gt;Here is an example of how to send an email using &lt;code&gt;arn:aws:states:::aws-sdk:sesv2:sendEmail&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "StartAt": "SendEmail",
  "States": {
    "SendEmail": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:sesv2:sendEmail",
      "Parameters": {
        "FromEmailAddress": "sender@example.com",
        "Destination": {
          "ToAddresses": ["recipient@example.com"]
        },
        "Content": {
          "Simple": {
            "Subject": {
              "Data": "Your confirmation email",
              "Charset": "UTF-8"
            },
            "Body": {
              "Text": {
                "Data": "Thank you for your action.",
                "Charset": "UTF-8"
              }
            }
          }
        }
      },
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this SendEmail state, we have defined a task that utilizes the resource &lt;code&gt;arn:aws:states:::aws-sdk:sesv2:sendEmail&lt;/code&gt;. The Parameters specify the sender's email, the recipient, and the content of the email, including both the subject and the body.&lt;/p&gt;

&lt;p&gt;This approach simplifies the architecture of your application, as there is no need to code and maintain a Lambda function for tasks that can be handled directly by Step Functions. Furthermore, error handling and retry mechanisms can be managed within the state machine's definition, providing a robust and reliable solution.&lt;/p&gt;
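&lt;p&gt;As a sketch of that error handling, a &lt;code&gt;Retry&lt;/code&gt; block could be added to the &lt;code&gt;SendEmail&lt;/code&gt; task; the retry counts and backoff values below are illustrative, not a recommendation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"SendEmail": {
  "Type": "Task",
  "Resource": "arn:aws:states:::aws-sdk:sesv2:sendEmail",
  "Parameters": { ... },
  "Retry": [
    {
      "ErrorEquals": ["States.TaskFailed"],
      "IntervalSeconds": 2,
      "MaxAttempts": 3,
      "BackoffRate": 2.0
    }
  ],
  "End": true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;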

&lt;h2&gt;
  
  
  How Can You Find the Parameters for a Specific Integration?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;On this page &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html"&gt;https://docs.aws.amazon.com/step-functions/latest/dg/supported-services-awssdk.html&lt;/a&gt;, you will find the enabled integrations for multiple AWS services.&lt;/li&gt;
&lt;li&gt;Then, search for the API documentation of the integration you are going to use. In this example, SES V2 was used, so we would search at &lt;a href="https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html"&gt;https://docs.aws.amazon.com/ses/latest/APIReference-V2/API_SendEmail.html&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;On the reference page, look for the action you want to perform, in this case, SendEmail.&lt;/li&gt;
&lt;li&gt;At this point, we will see the syntax of the request, details such as the URL, headers, and body of the request. Our focus will be on the body of the request:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST /v2/email/outbound-emails HTTP/1.1
Content-type: application/json

{
   "Content": {
      "Simple": {
      "Body": {
         "Text": {
            "Charset": "UTF-8",
            "Data": "body"
         }
      },
      "Subject": {
         "Charset": "UTF-8",
         "Data": "subject"
      }
      }
   },
   "Destination": {
      "ToAddresses": [
      "email"
      ]
   },
   "FromEmailAddress": "email"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="5"&gt;
&lt;li&gt;These same parameters that we see in the body are what we are going to add in the state definition of our state function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, the direct integration of AWS SDK into Step Functions opens a world of possibilities for orchestrating AWS services efficiently and effectively, allowing developers to focus on business logic rather than service integration.&lt;br&gt;
In this repository &lt;a href="https://github.com/jjoc007/poc_step_functions_examples/tree/main/2_send_email_via_ses"&gt;https://github.com/jjoc007/poc_step_functions_examples/tree/main/2_send_email_via_ses&lt;/a&gt;, you will find the example ready to deploy with Terraform; feel free to download it and try it out.&lt;/p&gt;

&lt;p&gt;If you liked this article, don't hesitate to give it a 👏 and a ⭐ on the repository.&lt;/p&gt;

&lt;p&gt;Thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://jjoc007.com/aws-step-functions-enviar-un-correo-desde-un-state-35b790c3314e"&gt;Spanish version&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/aws-step-functions-using-the-wait-state-type-1ab1"&gt;AWS Step Functions: Using the Wait State Type&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il"&gt;Introduction to AWS Step Functions Using Terraform as Infrastructure-as-Code Tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>amazon</category>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Step Functions: Using the Wait State Type</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Fri, 01 Dec 2023 04:58:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-step-functions-using-the-wait-state-type-1ab1</link>
      <guid>https://dev.to/aws-builders/aws-step-functions-using-the-wait-state-type-1ab1</guid>
      <description>&lt;p&gt;Before checking out the post, I recommend first looking at: &lt;a href="https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il"&gt;Introduction to AWS Step Functions using Terraform as an infrastructure as code tool.&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;The "Wait" state in AWS Step Functions is a powerful and flexible tool for managing workflows in the cloud. This state is essentially a pause mechanism, allowing workflows to wait for a specified period of time or until a specific event occurs before proceeding to the next step. It is crucial in scenarios where timing and temporal coordination are important.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operation of the "Wait" State:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Fixed Wait&lt;/strong&gt;: You can configure the "Wait" state so that your workflow pauses for a specific amount of time, expressed in seconds or until an exact hour and date. This option is useful for predictable delays, such as waiting between attempts of an operation or to allow time for external processes to complete.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"wait_ten_seconds" : {
  "Type" : "Wait",
  "Seconds" : 10,
  "Next": "NextState"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Dynamic Wait:&lt;/strong&gt; It is also possible to configure the "Wait" state so that the pause is computed at runtime: with &lt;code&gt;TimestampPath&lt;/code&gt; the workflow waits until a timestamp taken from the state input, and with &lt;code&gt;SecondsPath&lt;/code&gt; it waits for a number of seconds taken from the input. The workflow resumes as soon as that moment arrives, making it ideal for deadlines that are not known until execution time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example:
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"wait_until" : {
  "Type": "Wait",
  "TimestampPath": "$.expirydate",
  "Next": "NextState"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As we can see in the previous example, the waiting time is determined by an input variable &lt;code&gt;$.expirydate&lt;/code&gt;, which should come from the previous state.&lt;/p&gt;
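&lt;p&gt;A closely related variant uses &lt;code&gt;SecondsPath&lt;/code&gt; to read a numeric delay from the input instead of an absolute timestamp; the state and input field names here are illustrative:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"wait_dynamic_seconds" : {
  "Type": "Wait",
  "SecondsPath": "$.waitSeconds",
  "Next": "NextState"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;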

&lt;h2&gt;
  
  
  Common Use Cases for the Use of the Wait State:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Approval Processes:&lt;/strong&gt; In workflows that require human approvals, the "Wait" state can pause the process for a set interval before checking whether a response has been received.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coordination of Asynchronous Tasks:&lt;/strong&gt; In scenarios where multiple tasks are executed in parallel and one of them must wait for the others to complete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduled Delays:&lt;/strong&gt; Useful in cases where a delay between steps is needed, such as in batch processing or in the staggered sending of notifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example:
&lt;/h2&gt;

&lt;p&gt;We have a workflow that is divided into 3 stages or actions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create:&lt;/strong&gt; creation of the process. The process will have an ID and a state, and the state must advance to the effective state before an action of completion or reversal can be taken. It is important to clarify that the process's effective state does not occur immediately; it can take from 10 to 20 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execute Action:&lt;/strong&gt; At this point, the process must be in an effective state to execute the action. The allowed actions are finish or rollback, depending on the case. After this, the process state will be finishing or rolling back.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;End:&lt;/strong&gt; this stage will validate that the process has successfully finished or has been successfully reverted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivaw3sr7avmdvvnheaz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivaw3sr7avmdvvnheaz3.png" alt="AWS Step function definition"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Example states:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create&lt;/strong&gt;: This state calls a lambda function that will initialize the process.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"Create": {
      "Type": "Task",
      "Resource": "${aws_lambda_function.number_validator_lambda.arn}",
      "Parameters": {
        "input.$": "$",
        "action": "create"
      },
      "Next": "WaitForExecuteAction"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;WaitForExecuteAction:&lt;/strong&gt; After the process is initiated, a waiting period is required before validating the current state and executing the action (finish / rollback).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"WaitForExecuteAction": {
      "Type": "Wait",
      "Seconds": 10,
      "Next": "ExecuteAction"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;ExecuteAction&lt;/strong&gt;: After the waiting period, a lambda function must be executed that validates the current state of the process. If it is effective, the action should be executed; otherwise, the flow should fail.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"ExecuteAction": {
      "Type": "Task",
      "Resource": "${aws_lambda_function.number_validator_lambda.arn}",
      "Parameters": {
        "number.$": "$.number",
        "update.$": "$.update",
        "status.$": "$.status",
        "action": "execute_action"
      },
      "Next": "WaitForEnd"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;WaitForEnd&lt;/strong&gt;: After this, an additional waiting period is required for the process to reach a finished or rolled-back state.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"WaitForEnd": {
      "Type": "Wait",
      "Seconds": 10,
      "Next": "EndDeploy"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;EndDeploy&lt;/strong&gt;: Finally, it must be validated that the process has indeed finished; if not, the flow should fail.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"EndDeploy": {
      "Type": "Task",
      "Resource": "${aws_lambda_function.number_validator_lambda.arn}",
      "Parameters": {
        "number.$": "$.number",
        "update.$": "$.update",
        "status.$": "$.status",
        "action": "end"
      },
      "Next": "Ended"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Ended&lt;/strong&gt;: Final step of the workflow.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"Ended": {
      "Type": "Succeed"
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In summary, the "Wait" state in AWS Step Functions is a powerful tool for efficient workflow management. Its ability to handle both fixed and event-based pauses allows developers to build smarter and more responsive applications that can effectively react to changing conditions and business needs.&lt;/p&gt;

&lt;p&gt;Complete example: &lt;a href="https://github.com/jjoc007/poc_step_function_validator/tree/main" rel="noopener noreferrer"&gt;https://github.com/jjoc007/poc_step_function_validator/tree/main&lt;/a&gt;&lt;br&gt;
If you like it, give a 👍 to the post and a ⭐ to the repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  References:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il"&gt;introduction to aws step functions&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://jjoc007.com/introducci%C3%B3n-a-aws-step-functions-usando-terraform-como-herramienta-de-infrastructura-como-c%C3%B3digo-e2add2930269" rel="noopener noreferrer"&gt;introduction to aws step functions Spanish version &lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/jjoc007/poc_step_function_validator/tree/main" rel="noopener noreferrer"&gt;Repository&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://states-language.net/spec.html#wait-state" rel="noopener noreferrer"&gt;Aws documentation&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Introduction to AWS Step Functions Using Terraform as Infrastructure-as-Code Tool</title>
      <dc:creator>juan jose orjuela</dc:creator>
      <pubDate>Wed, 01 Nov 2023 17:39:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il</link>
      <guid>https://dev.to/aws-builders/introduction-to-aws-step-functions-using-terraform-as-infrastructure-as-code-tool-33il</guid>
      <description>&lt;h2&gt;
  
  
  Demo in Spanish
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/playlist?list=PL2gu2Qe_CGFkswvxaKiY2_hGgj6Que0sj" rel="noopener noreferrer"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F123d0pbt04bs4n34gpjw.png" alt="demo"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Microservices and serverless architectures are booming in the current information technology landscape. However, coordinating and handling errors among these services can be a challenge. This is where AWS Step Functions come into play, as they provide a robust workflow tool to orchestrate microservices components and handle workflows in AWS.&lt;/p&gt;

&lt;p&gt;In this article, we will introduce the concept of AWS Step Functions, explaining their usefulness and advantages in creating cloud-based applications. Then, as a practical case, we will show how to implement and manage AWS Step Functions using Terraform, a very popular infrastructure-as-code tool. The purpose of this article is to provide a clear understanding of AWS Step Functions and their implementation through Terraform, so you can make the most of this service and enhance the efficiency and robustness of your applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are AWS Step Functions
&lt;/h2&gt;

&lt;p&gt;AWS Step Functions is a fully managed workflow orchestration service that makes it easy to coordinate the components of distributed applications. This service allows developers to visually design workflows, or 'step functions,' which coordinate the components of their applications in a specific pattern, such as sequences, branches, and merges.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features of AWS Step Functions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  State Management
&lt;/h3&gt;

&lt;p&gt;AWS Step Functions keep track of the state of each workflow, maintaining its activity and data at all stages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Retries and Error Handling
&lt;/h3&gt;

&lt;p&gt;Provides automatic error handling and retries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Visualization
&lt;/h3&gt;

&lt;p&gt;Offers a graphical interface to visualize and modify workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compatibility
&lt;/h3&gt;

&lt;p&gt;AWS Step Functions can interact with almost all other AWS services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Programming
&lt;/h3&gt;

&lt;p&gt;Developers can express coordination and conditional logic declaratively in the workflow definition, instead of implementing it in application code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Using AWS Step Functions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Resilience
&lt;/h3&gt;

&lt;p&gt;AWS Step Functions has built-in error handling capabilities, making workflows resilient to failures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability
&lt;/h3&gt;

&lt;p&gt;As a managed service, AWS Step Functions can scale as needed to execute workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced Coding
&lt;/h3&gt;

&lt;p&gt;AWS Step Functions eliminates the need to write 'glue' code to coordinate microservices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easy to Monitor
&lt;/h3&gt;

&lt;p&gt;AWS Step Functions is integrated with CloudWatch, allowing easy monitoring of workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages of Using AWS Step Functions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cost
&lt;/h3&gt;

&lt;p&gt;While AWS Step Functions can reduce the amount of code you need to write, it is not free. The cost can add up quickly for complicated or high-volume workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Complexity
&lt;/h3&gt;

&lt;p&gt;AWS Step Functions introduces a new level of complexity to the application, as the Step Functions service must now be managed and understood.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time Limitations
&lt;/h3&gt;

&lt;p&gt;Each execution of a Standard workflow has a maximum duration of one year (Express workflows are limited to five minutes). This may not be suitable for some long-term workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vendor Lock-in
&lt;/h3&gt;

&lt;p&gt;By using AWS Step Functions, you are locking yourself into the AWS platform. If you ever need to migrate to another platform in the future, this could be a limiting factor.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Terraform
&lt;/h2&gt;

&lt;p&gt;Terraform is an open-source Infrastructure as Code (IaC) tool created by HashiCorp. It allows developers to define and provide data center infrastructure using a declarative configuration language. This includes servers, storage, and networking across a variety of cloud service providers, including AWS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Using Terraform
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cloud-Agnostic Platform
&lt;/h3&gt;

&lt;p&gt;Unlike provider-specific IaC tools like AWS CloudFormation, Terraform is cloud-agnostic, meaning it can work with any cloud service provider, including AWS, Google Cloud, Azure, and others.&lt;/p&gt;

&lt;h3&gt;
  
  
  Declarative Configuration Language
&lt;/h3&gt;

&lt;p&gt;Terraform uses a declarative configuration language, meaning you specify what resources you want to create without having to detail the steps to create them.&lt;/p&gt;

&lt;h3&gt;
  
  
  State Management
&lt;/h3&gt;

&lt;p&gt;Terraform maintains a state of your infrastructure and can determine what has changed since the last execution, allowing for efficient planning of changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages of Using Terraform
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complexity
&lt;/h3&gt;

&lt;p&gt;Although Terraform can be very powerful, it can also be complicated to configure and use correctly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Documentation
&lt;/h3&gt;

&lt;p&gt;Sometimes, the documentation for Terraform can be lacking or confusing, especially for more complex use cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Speed
&lt;/h3&gt;

&lt;p&gt;Terraform may be slower to support new features of cloud services compared to provider-specific IaC tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic Example in Terraform "Hello World"
&lt;/h2&gt;

&lt;p&gt;Here is a basic example of what a Terraform script would look like to create a single EC2 server in AWS. This is equivalent to a 'Hello World' in Terraform:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-west-2"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"example"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;ami&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ami-0c55b159cbfafe1f0"&lt;/span&gt;
    &lt;span class="nx"&gt;instance_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t2.micro"&lt;/span&gt;

    &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example-instance"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is a very simple script. Here's what's happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;provider "aws"&lt;/code&gt;: This specifies that we are going to use AWS as our provider for our resource. The region is specified within this block.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;resource "aws_instance" "example"&lt;/code&gt;: This defines a resource, in this case, an EC2 instance. "example" is an arbitrary name we give to this resource.&lt;/li&gt;
&lt;li&gt;Inside the resource block, we specify the Amazon Machine Image (AMI) ID we want to use for our instance and the instance type. In this case, we are using an AMI for a basic Ubuntu instance and t2.micro for the instance type, which is the lowest cost option.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;tags&lt;/code&gt; block allows adding labels to the instance, in this case, simply giving the instance the name "example-instance".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To run this script, you must have Terraform installed and AWS credentials configured in your environment. Then you can initialize Terraform with &lt;code&gt;terraform init&lt;/code&gt;, plan the execution with &lt;code&gt;terraform plan&lt;/code&gt;, and finally apply the changes with &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture of the Example to Follow
&lt;/h2&gt;

&lt;p&gt;In this section, we will implement a simple step flow using the AWS Step Functions service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl366dn802f8pd6dasxlq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl366dn802f8pd6dasxlq.jpg" alt="General Architecture Example AWS Step Functions"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;General Architecture Example AWS Step Functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The flow starts by executing a lambda that generates a random number between 1 and 100. The flow then validates this number: if it is even, the lambda "Even" is executed; if it is odd, the lambda "Odd" is executed. After this, the flow ends.&lt;/p&gt;

&lt;p&gt;To make the example functional, we must carry out the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create the Logic of the Lambda “Number Generator”&lt;/strong&gt;: This lambda will simply generate a random number between 1 and 100 and return it as a response. This lambda will be developed in Node.js 18.x.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create the Logic of the Lambda “Even”&lt;/strong&gt;: This lambda will receive the previously generated number as an input parameter and will print a message in the logs specifying that the number is even. Like the previous lambda, the logic will be developed in Node.js 18.x.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create the Logic of the Lambda “Odd”&lt;/strong&gt;: This lambda will receive the previously generated number as an input parameter and will print a message in the logs specifying that the number is odd.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create the Base Code of the Terraform Project&lt;/strong&gt;: We are going to create an initial Terraform project by importing the AWS provider.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate a Packaging Mechanism for Each of the Lambdas&lt;/strong&gt;: In order to upload this logic as a lambda function to AWS, it is necessary that they be packaged as .zip files. We will do this through Terraform.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Infrastructure for the 3 Lambdas through Terraform&lt;/strong&gt;: In the code base created in Terraform, we will add the infrastructure for the 3 lambdas mentioned above and associate the logic code of each of them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Infrastructure for the Step Function That We Will Use in the Terraform Project&lt;/strong&gt;: In the code base created in Terraform, we will implement the necessary infrastructure resources to create the step function considering the flow outlined above.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Infrastructure Components That Allow the Step Function to Execute the Lambdas&lt;/strong&gt;: It is necessary to add a role to the step function so that it can execute the lambdas specified above.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test the Created Step Function!&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;
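
&lt;p&gt;As a preview of step 7, the branching flow described above can be sketched in Amazon States Language (the JSON dialect that Step Functions uses), embedded in a Terraform heredoc. This is only a rough sketch under assumptions: the state names are illustrative, and the Terraform resource names anticipate the lambda resources we will create further below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Illustrative sketch of the state machine definition (not the final code)
definition = &amp;lt;&amp;lt;EOF
{
  "StartAt": "NumberGenerator",
  "States": {
    "NumberGenerator": {
      "Type": "Task",
      "Resource": "${aws_lambda_function.number_generator_lambda.arn}",
      "Next": "EvenOrOdd"
    },
    "EvenOrOdd": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.is_even", "BooleanEquals": true, "Next": "Even" }
      ],
      "Default": "Odd"
    },
    "Even": { "Type": "Task", "Resource": "${aws_lambda_function.even_lambda.arn}", "End": true },
    "Odd": { "Type": "Task", "Resource": "${aws_lambda_function.odd_lambda.arn}", "End": true }
  }
}
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The Choice state routes on the &lt;code&gt;is_even&lt;/code&gt; flag that the generator lambda returns alongside the number.&lt;/p&gt;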

&lt;h2&gt;
  
  
  Implementation of the Example
&lt;/h2&gt;

&lt;p&gt;Now that we have explained the example that we are going to carry out with AWS Step Functions, let's implement each step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create the Logic of the Lambda "Number Generator"
&lt;/h3&gt;

&lt;p&gt;For the lambda we're creating, we will make a folder where we will store its logic. In this folder, we will have the files &lt;code&gt;package.json&lt;/code&gt; and &lt;code&gt;main.js&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kgnronbjm8v6z5psxpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0kgnronbjm8v6z5psxpy.png" alt="Initial Folder Structure for Lambdas"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Initial Folder Structure for Lambdas&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;package.json&lt;/code&gt; file will contain the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"number-generator"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Number generator lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"devDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"aws-sdk"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^2.1045.0"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;main.js&lt;/code&gt; file contains the logic to generate a random number and return it, along with an &lt;code&gt;is_even&lt;/code&gt; flag indicating whether the number is even:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;random&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Generated number is: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;is_even&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Create the Logic of the Lambda Even
&lt;/h3&gt;

&lt;p&gt;For the Even lambda, we follow the same pattern as the previous one, with a &lt;code&gt;package.json&lt;/code&gt; and a &lt;code&gt;main.js&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;package.json&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"even number"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Even number lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"devDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"aws-sdk"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^2.1045.0"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;main.js&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`My even number is: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;number&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Create the Logic of the Lambda Odd
&lt;/h3&gt;

&lt;p&gt;Lastly, the files for the lambda Odd:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;package.json&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"odd number"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Odd number lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"main"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"main.js"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"dependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"devDependencies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"aws-sdk"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"^2.1045.0"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;main.js&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;

&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`My odd number is: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;number&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For creating the logic of the 3 lambdas, it's important to consider that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;aws-sdk&lt;/code&gt; dependency was added; for this example, it won't be used but is included to illustrate how we can import AWS's own libraries to interact with the services.&lt;/li&gt;
&lt;li&gt;It's crucial that in each lambda folder we run the command &lt;code&gt;npm i&lt;/code&gt;, so that the dependencies we're adding to each of the lambdas can be installed.&lt;/li&gt;
&lt;li&gt;The final folder structure should look similar to this:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3skaphoc4z3i7s146jva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3skaphoc4z3i7s146jva.png" alt="Final folder structure for the lambda logic"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Final folder structure for the lambda logic&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Base Code for the Terraform Project
&lt;/h3&gt;

&lt;p&gt;Similar to the lambda logic, we need a folder to store our entire Terraform project that will help us create the infrastructure components necessary for the example to function. Initially, we'll have a folder named &lt;code&gt;terraform&lt;/code&gt; and a file called &lt;code&gt;main.tf&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F943v6pda52q7gq8ejin2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F943v6pda52q7gq8ejin2.png" alt="Initial Folder Structure for the Terraform Project"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure: Initial Folder Structure for the Terraform Project&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;main.tf&lt;/code&gt; file, we define the providers we will use to create our infrastructure components. A provider in Terraform is a collection of resources we can leverage to configure and manage components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;aws&lt;/strong&gt;: This provider supplies all the infrastructure components we can utilize from Amazon Web Services (AWS).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;archive&lt;/strong&gt;: This provider offers utilities for file packaging. We will use it to generate the &lt;code&gt;.zip&lt;/code&gt; files for each of the Lambda functions we've been creating.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 3.0"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"archive"&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, we should initialize the project. For this, we execute the command &lt;code&gt;terraform init&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk7j5te2nw6a8bl73rum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbk7j5te2nw6a8bl73rum.png" alt="Terraform Project Initialized"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This way, we have our Terraform project's base ready for starting the creation of infrastructure components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Generating a Packaging Mechanism for Each Lambda
&lt;/h3&gt;

&lt;p&gt;The logic for each lambda must be packaged into a &lt;code&gt;.zip&lt;/code&gt; file before it can be uploaded to AWS. For this, we'll use the &lt;code&gt;archive&lt;/code&gt; provider, creating a new file in the Terraform project called &lt;code&gt;lambda_resources.tf&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"archive_file"&lt;/span&gt; &lt;span class="s2"&gt;"number_generator"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"zip"&lt;/span&gt;
    &lt;span class="nx"&gt;source_dir&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../lambdas/1_number_generator/"&lt;/span&gt;
    &lt;span class="nx"&gt;output_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../lambdas/dist/1_number_generator.zip"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"archive_file"&lt;/span&gt; &lt;span class="s2"&gt;"even"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"zip"&lt;/span&gt;
    &lt;span class="nx"&gt;source_dir&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../lambdas/2_even/"&lt;/span&gt;
    &lt;span class="nx"&gt;output_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../lambdas/dist/2_even.zip"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"archive_file"&lt;/span&gt; &lt;span class="s2"&gt;"odd"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"zip"&lt;/span&gt;
    &lt;span class="nx"&gt;source_dir&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../lambdas/3_odd/"&lt;/span&gt;
    &lt;span class="nx"&gt;output_path&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../lambdas/dist/3_odd.zip"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each code block uses the &lt;code&gt;archive_file&lt;/code&gt; data source (from the &lt;code&gt;archive&lt;/code&gt; provider) to create a zip file from a source directory. Let's break down each line:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;data archive_file number_generator&lt;/strong&gt;: This line defines an &lt;code&gt;archive_file&lt;/code&gt; data source named number_generator. The &lt;code&gt;data&lt;/code&gt; keyword indicates that Terraform reads or computes this value (here, building the zip) rather than declaring managed infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;type = zip&lt;/strong&gt;: This line specifies the type of output file we want. In this case, we are creating a zip file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;source_dir = ../lambdas/1_number_generator/&lt;/strong&gt;: This line specifies the source directory we want to compress. All files and subdirectories within ../lambdas/1_number_generator/ will be included in the zip file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;output_path = ../lambdas/dist/1_number_generator.zip&lt;/strong&gt;: This line specifies the path and name of the output file. The resulting zip file will be saved in ../lambdas/dist/1_number_generator.zip.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This type of operation is quite common in serverless applications like AWS Lambda, where you need to upload your function code in zip format to AWS. Therefore, this code block helps you prepare your Lambda function for deployment, packaging the function code into a zip file that can then be uploaded to AWS.&lt;/p&gt;

&lt;p&gt;To see the result of this, we run &lt;code&gt;terraform apply&lt;/code&gt;. After this, we will see that the packaged lambdas have already been created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbguiib31t0lrkso253vi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbguiib31t0lrkso253vi.png" alt="Packaged Lambdas"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Figure: Packaged Lambdas&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Infrastructure for the 3 Lambdas Using Terraform
&lt;/h3&gt;

&lt;p&gt;Now that the lambda logic is generated and ready to upload to AWS, we need to create the infrastructure in the Terraform project. For this, we need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an IAM role to be used by the lambdas, which is necessary to associate permissions with each of them.&lt;/li&gt;
&lt;li&gt;Create the infrastructure resources for each lambda and link them to the previously created role.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  IAM Role for the Lambdas
&lt;/h4&gt;

&lt;p&gt;For this step, we will create a new file called &lt;code&gt;iam.tf&lt;/code&gt; with the following content:&lt;/p&gt;

&lt;blockquote&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"example_lambda_role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"example_lambda_role_for_numbers"&lt;/span&gt;
    &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
    {
    "Version": "2012-10-17",
    "Statement": [
        {
        "Action": "sts:AssumeRole",
        "Principal": {
            "Service": "lambda.amazonaws.com"
        },
        "Effect": "Allow",
        "Sid": ""
        }
    ]
}
&lt;/span&gt;&lt;span class="no"&gt;    EOF
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this code segment, we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;resource aws_iam_role" "example_lambda_role&lt;/strong&gt;: This line is declaring a Terraform resource of type aws_iam_role (an IAM role in AWS) with the local name example_lambda_role. This local name is what you would use within your Terraform configuration to refer to this resource.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;name = example_lambda_role_for_numbers&lt;/strong&gt;: This is the name the IAM role will have in AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;assume_role_policy&lt;/strong&gt;: This is a policy document, passed as a heredoc string, that defines which entities are allowed to assume the role. In this case, AWS's Lambda service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The policy itself is a JSON document containing a list of statements, and each statement defines a rule. In this case, there's a single statement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Action: sts:AssumeRole&lt;/strong&gt;: This action allows entities to assume the role.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Principal: Service: lambda.amazonaws.com&lt;/strong&gt;: This is the entity that is allowed to assume the role. In this case, it's the AWS Lambda service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Effect: Allow&lt;/strong&gt;: This is the decision of the policy. In this case, it's allowing (Allow) the action.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sid&lt;/strong&gt;: This is the Statement ID of the policy. In this case, it's empty, but you can use it to give a unique identifier to each statement.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This code is essentially creating an IAM role that allows AWS Lambda functions to assume this role. This is a common pattern in AWS when you want to allow your Lambda functions to interact with other AWS services.&lt;/p&gt;
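
&lt;p&gt;One caveat worth noting: as written, the role only lets Lambda assume it; it grants no permissions of its own. Since our lambdas write messages with &lt;code&gt;console.log&lt;/code&gt;, you would typically also attach the AWS-managed &lt;code&gt;AWSLambdaBasicExecutionRole&lt;/code&gt; policy so they can write to CloudWatch Logs. A minimal sketch (the resource name &lt;code&gt;lambda_logs&lt;/code&gt; is an illustrative choice):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# Attach the AWS-managed basic execution policy (CloudWatch Logs access)
resource "aws_iam_role_policy_attachment" "lambda_logs" {
    role       = aws_iam_role.example_lambda_role.name
    policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;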

&lt;h4&gt;
  
  
  Lambda Infrastructure Resources
&lt;/h4&gt;

&lt;p&gt;For this step, we will create a file called &lt;code&gt;lambda.tf&lt;/code&gt; which will contain the following content:&lt;/p&gt;

&lt;blockquote&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lambda_function"&lt;/span&gt; &lt;span class="s2"&gt;"number_generator_lambda"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;filename&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;archive_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;number_generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_path&lt;/span&gt;
    &lt;span class="nx"&gt;source_code_hash&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;archive_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;number_generator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_base64sha256&lt;/span&gt;
    &lt;span class="nx"&gt;function_name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"poc_number_generator_lambda"&lt;/span&gt;
    &lt;span class="nx"&gt;role&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example_lambda_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
    &lt;span class="nx"&gt;handler&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1_number_generator/main.handler"&lt;/span&gt;
    &lt;span class="nx"&gt;runtime&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nodejs18.x"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lambda_function"&lt;/span&gt; &lt;span class="s2"&gt;"even_lambda"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;filename&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;archive_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;even&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_path&lt;/span&gt;
    &lt;span class="nx"&gt;source_code_hash&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;archive_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;even&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_base64sha256&lt;/span&gt;
    &lt;span class="nx"&gt;function_name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"poc_even_lambda"&lt;/span&gt;
    &lt;span class="nx"&gt;role&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example_lambda_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
    &lt;span class="nx"&gt;handler&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2_even/main.handler"&lt;/span&gt;
    &lt;span class="nx"&gt;runtime&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nodejs18.x"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lambda_function"&lt;/span&gt; &lt;span class="s2"&gt;"odd_lambda"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;filename&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;archive_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;odd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_path&lt;/span&gt;
    &lt;span class="nx"&gt;source_code_hash&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;archive_file&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;odd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output_base64sha256&lt;/span&gt;
    &lt;span class="nx"&gt;function_name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"poc_odd_lambda"&lt;/span&gt;
    &lt;span class="nx"&gt;role&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;example_lambda_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
    &lt;span class="nx"&gt;handler&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"3_odd/main.handler"&lt;/span&gt;
    &lt;span class="nx"&gt;runtime&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nodejs18.x"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;p&gt;This code snippet is an example of how to use Terraform to create AWS Lambda functions. AWS Lambda is a service that lets you run code without provisioning or managing servers: you upload your code (known as a Lambda function), and Lambda handles provisioning, scaling, and managing the underlying compute.&lt;/p&gt;

&lt;p&gt;The three resources are identical except for their names and file paths, so we will analyze the &lt;code&gt;odd_lambda&lt;/code&gt; resource line by line to better understand what it is doing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;resource aws_lambda_function odd_lambda&lt;/strong&gt;: This line declares a Terraform resource of type aws_lambda_function with the local name odd_lambda. Terraform uses this local name to refer to this resource in other parts of the configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;filename = data.archive_file.odd.output_path&lt;/strong&gt;: This specifies the path of the zip file containing the Lambda function's code. Here an archive_file data source generates that zip, and data.archive_file.odd.output_path refers to the output path of the generated file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;source_code_hash = data.archive_file.odd.output_base64sha256&lt;/strong&gt;: This is a hash of the Lambda function's source code. Terraform uses this hash to determine if the source code has changed and if it needs to redeploy the Lambda function.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;function_name = poc_odd_lambda&lt;/strong&gt;: This is the name that the Lambda function will have in AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;role = aws_iam_role.example_lambda_role.arn&lt;/strong&gt;: This is the ARN (Amazon Resource Name) of the IAM role that the Lambda function will assume when executed. In this case, it refers to the IAM role created in the previous example.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;handler = 3_odd/main.handler&lt;/strong&gt;: This is the handler of the Lambda function, the function in your code that Lambda calls when the function is invoked. The format is path/file.function: here, 3_odd/main.handler means Lambda will call the exported handler function in the main file inside the 3_odd directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;runtime = nodejs18.x&lt;/strong&gt;: This is the runtime environment in which the Lambda function will execute. In this case, the function will run in a Node.js 18.x environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, this Terraform code is creating a Lambda function that will execute code located at the specified path in a Node.js 18.x environment, will assume a specific IAM role when executed, and will have a specific name in AWS.&lt;/p&gt;
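&lt;p&gt;The handler code itself is beyond the scope of this walkthrough, but to make the wiring concrete, here is a minimal sketch of what &lt;code&gt;1_number_generator/main.js&lt;/code&gt; could look like. The file contents, the random range, and the exact shape of the returned object are assumptions for illustration; the real code is in the GitHub repository linked at the end of the post. Note how the result is nested under a Payload key, matching the $.Payload.is_even path that the state machine inspects later:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// Hypothetical sketch of 1_number_generator/main.js; not the post's actual code.
// Lambda calls the exported "handler" function, matching handler = "1_number_generator/main.handler".
exports.handler = async function (event) {
    // Generate a random integer (the 0-99 range is an assumption for illustration).
    const number = Math.floor(Math.random() * 100);
    // Nest the result under Payload so a Choice state can read "$.Payload.is_even".
    return { Payload: { number: number, is_even: number % 2 === 0 } };
};
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;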

&lt;p&gt;To test this, we will run the command &lt;code&gt;terraform apply&lt;/code&gt; and then confirm with &lt;code&gt;yes&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j7ehiz3zkdgmewwb7nv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7j7ehiz3zkdgmewwb7nv.png" alt="Resources created"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Resources created&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, if we go to the AWS console, we will see the Lambdas already created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzjs8dun5atiak4ksr9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbzjs8dun5atiak4ksr9n.png" alt="Resources Created"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Resources created&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating Infrastructure Components to Enable Step Function to Execute Lambdas
&lt;/h3&gt;

&lt;p&gt;For the Step Function to execute the previously created Lambdas, we need a role that the Step Function can assume with permission to invoke them. To that end, we will create the following resources inside the &lt;code&gt;iam.tf&lt;/code&gt; file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;resource aws_iam_role step_functions_role&lt;/strong&gt;: Role that the Step Function will assume.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;data aws_iam_policy_document lambda_access_policy&lt;/strong&gt;: IAM Policy document that will be associated with the Step Function's role. For simplicity it allows lambda:* on all resources; in a real project you would restrict it to lambda:InvokeFunction on the specific function ARNs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;resource aws_iam_policy step_functions_policy_lambda&lt;/strong&gt;: IAM Policy resource to associate with the Step Function's role.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;resource aws_iam_role_policy_attachment step_functions_to_lambda&lt;/strong&gt;: Explicit association of the IAM Policy with the Role.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  resource aws_iam_role step_functions_role
&lt;/h4&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"step_functions_role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"step_functions_role_poc_sf"&lt;/span&gt;

    &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;
        &lt;span class="nx"&gt;Statement&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;Action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;
            &lt;span class="nx"&gt;Effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
            &lt;span class="nx"&gt;Principal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;Service&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"states.amazonaws.com"&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  data aws_iam_policy_document lambda_access_policy
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_policy_document"&lt;/span&gt; &lt;span class="s2"&gt;"lambda_access_policy"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;actions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="s2"&gt;"lambda:*"&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  resource aws_iam_policy step_functions_policy_lambda
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_iam_policy" "step_functions_policy_lambda" {
    name   = "step_functions_policy_lambda_policy_all_poc_sf"
    policy = data.aws_iam_policy_document.lambda_access_policy.json
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h4&gt;
  
  
  resource aws_iam_role_policy_attachment step_functions_to_lambda
&lt;/h4&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_iam_role_policy_attachment" "step_functions_to_lambda" {
    role       = aws_iam_role.step_functions_role.name
    policy_arn = aws_iam_policy.step_functions_policy_lambda.arn
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Creating the Step Function Infrastructure for Our Terraform Project
&lt;/h3&gt;

&lt;p&gt;In this section, we will create the infrastructure components for a Step Function that orchestrates the previously created Lambdas.&lt;/p&gt;

&lt;p&gt;For this, we will create a file named &lt;code&gt;step_function.tf&lt;/code&gt; in our Terraform project with the following content:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# Step Functions State Machine&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_sfn_state_machine"&lt;/span&gt; &lt;span class="s2"&gt;"number_processor_sf"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"NumberProcessorSF"&lt;/span&gt;
    &lt;span class="nx"&gt;role_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;step_functions_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;

    &lt;span class="nx"&gt;definition&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
{
    "Comment": "execute lambdas",
    "StartAt": "NumberGenerator",
    "States": {
    "NumberGenerator": {
        "Type": "Task",
        "Resource": "${aws_lambda_function.number_generator_lambda.arn}",
        "Next": "IsNumberEven"
    },
    "IsNumberEven": {
        "Type": "Choice",
        "Choices": [
        {
            "Variable": "$.Payload.is_even",
            "BooleanEquals": true,
            "Next": "Even"
        }
        ],
        "Default": "Odd"
    },
    "Even": {
        "Type": "Task",
        "Resource": "${aws_lambda_function.even_lambda.arn}",
        "End": true
    },
    "Odd": {
        "Type": "Task",
        "Resource": "${aws_lambda_function.odd_lambda.arn}",
        "End": true
    }
    }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
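&lt;p&gt;Before deploying, it can help to trace the flow by hand. The following plain Node.js sketch simulates the routing the state machine performs (no AWS calls; the handler bodies are assumptions for illustration): NumberGenerator produces a number, the IsNumberEven choice checks $.Payload.is_even, and the execution ends in either Even or Odd:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// Local simulation of the NumberProcessorSF flow; not real Step Functions code.
// The handler bodies are assumptions for illustration.
function numberGenerator() {
    const number = Math.floor(Math.random() * 100);
    return { Payload: { number: number, is_even: number % 2 === 0 } };
}

function evenHandler(input) {
    return "number " + input.Payload.number + " is even";
}

function oddHandler(input) {
    return "number " + input.Payload.number + " is odd";
}

function runStateMachine() {
    const output = numberGenerator();      // NumberGenerator task
    if (output.Payload.is_even === true) { // IsNumberEven: BooleanEquals true
        return evenHandler(output);        // Even task, "End": true
    }
    return oddHandler(output);             // Odd task (Default branch), "End": true
}

console.log(runStateMachine());
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;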

&lt;p&gt;As in the previous steps, simply run &lt;code&gt;terraform apply&lt;/code&gt; to apply the changes to the cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the Created Step Function!
&lt;/h3&gt;

&lt;p&gt;In the AWS console, navigate to the Step Functions service; you should see a state machine named &lt;code&gt;NumberProcessorSF&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uwe3re1hk7xluqe8gni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uwe3re1hk7xluqe8gni.png" alt="Step Function Details"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Detail of the Created Step Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In the definition section, we can see a diagram of the defined flow, which matches the architecture laid out at the beginning of the exercise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqmg4dg1i5j32grr45q2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqmg4dg1i5j32grr45q2.png" alt="Definition of the Created Step Function"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Definition of the Created Step Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, to test it, we go to the Executions tab and click the Start Execution button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8sc321unlyn51iaejnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8sc321unlyn51iaejnp.png" alt="Execute the Created Step Function"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Execute the Created Step Function&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In this view, we can see the flow of our execution and each of the steps taken to complete it. Additionally, if we click on any step, we can see its logs, inputs, and outputs, which makes each execution easy to trace.&lt;/p&gt;

&lt;p&gt;This is what a successful execution looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftawsbxaazja4rw3pp3ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftawsbxaazja4rw3pp3ji.png" alt="Successful Execution"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Successful Execution&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Informative Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Spanish version: &lt;a href="https://jjoc007.com/introducci%C3%B3n-a-aws-step-functions-usando-terraform-como-herramienta-de-infrastructura-como-c%C3%B3digo-e2add2930269" rel="noopener noreferrer"&gt;https://jjoc007.com/introducci%C3%B3n-a-aws-step-functions-usando-terraform-como-herramienta-de-infrastructura-como-c%C3%B3digo-e2add2930269&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Example on GitHub: &lt;a href="https://github.com/jjoc007/poc-step-function" rel="noopener noreferrer"&gt;https://github.com/jjoc007/poc-step-function&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AWS Step Functions: &lt;a href="https://aws.amazon.com/es/step-functions/" rel="noopener noreferrer"&gt;https://aws.amazon.com/es/step-functions/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Terraform: &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;https://www.terraform.io/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>node</category>
    </item>
  </channel>
</rss>
