<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Carlos José Castro Galante</title>
    <description>The latest articles on DEV Community by Carlos José Castro Galante (@carlosjcastrog).</description>
    <link>https://dev.to/carlosjcastrog</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3850137%2F7c4306aa-d4a6-4d9d-9327-a2b882c9d13d.jpeg</url>
      <title>DEV Community: Carlos José Castro Galante</title>
      <link>https://dev.to/carlosjcastrog</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/carlosjcastrog"/>
    <language>en</language>
    <item>
      <title>GitHub Copilot in 2026 is not what you think it is anymore</title>
      <dc:creator>Carlos José Castro Galante</dc:creator>
      <pubDate>Sat, 18 Apr 2026 01:31:28 +0000</pubDate>
      <link>https://dev.to/carlosjcastrog/github-copilot-in-2026-is-not-what-you-think-it-is-anymore-ij3</link>
      <guid>https://dev.to/carlosjcastrog/github-copilot-in-2026-is-not-what-you-think-it-is-anymore-ij3</guid>
      <description>&lt;p&gt;If you still think of GitHub Copilot as "the thing that autocompletes your code," you're about two years behind. That's not a criticism - the product has changed faster than most people's mental models of it. This post is an attempt to give you an accurate picture of what Copilot actually is right now, what the research says about its impact, and where the real limits are.&lt;/p&gt;

&lt;h2&gt;What it does under the hood&lt;/h2&gt;

&lt;p&gt;Every time Copilot generates a suggestion, it builds a prompt from whatever context it can gather: the code around your cursor, other open tabs, your repo's URL, any custom instruction files you've set up, and - if you've configured them - indexed repository content or attached data from MCP servers. That prompt goes over TLS to GitHub's Copilot proxy, which handles authentication, content filtering, public-code-match checks, and rate limiting. Then it routes to whatever model you've selected.&lt;/p&gt;

&lt;p&gt;Inline completions use a Fill-in-the-Middle (FIM) approach, meaning the model sees both the code before and after the cursor rather than just a prefix. GitHub ran A/B tests on this and found it lifted accepted completions by around 10%. In 2024 they also swapped out the original completion backend for a custom-trained model that reduced latency by 35%, delivered 12% higher acceptance rates, and tripled throughput.&lt;/p&gt;
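&lt;p&gt;As a rough illustration, a FIM prompt hands the model both sides of the cursor and asks it to produce the middle. The sentinel strings below are placeholders I made up; real models use their own special tokens:&lt;/p&gt;

```python
# Illustrative sketch of Fill-in-the-Middle (FIM) prompt assembly.
# The sentinel strings are invented placeholders, not GitHub's tokens.
PREFIX, SUFFIX, MIDDLE = "[fim-prefix]", "[fim-suffix]", "[fim-middle]"

def build_fim_prompt(before_cursor: str, after_cursor: str) -> str:
    """Give the model both sides of the cursor, then ask for the middle."""
    return f"{PREFIX}{before_cursor}{SUFFIX}{after_cursor}{MIDDLE}"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```

&lt;p&gt;The point is simply that the model conditions on code after the cursor too, which a plain prefix-only prompt cannot do.&lt;/p&gt;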

&lt;h2&gt;The feature surface in 2026&lt;/h2&gt;

&lt;p&gt;Copilot has expanded from one feature (inline completions) to something that looks more like a platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Edit Suggestions&lt;/strong&gt; - available since April 2025 in VS Code, Xcode, and Eclipse - predicts where in the file you're going to edit next, not just what comes after the cursor. It's a subtle difference but it changes how you move through a codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copilot Edits / multi-file edit mode&lt;/strong&gt; reached GA in February 2025. It uses a dual-model architecture: one model proposes the changes, and a speculative-decoding endpoint applies them quickly. You describe what you want at the level of a task, and it touches as many files as needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agent mode&lt;/strong&gt; is what changed the product's identity. It's available in VS Code, Visual Studio, JetBrains, Eclipse, and Xcode. In agent mode, Copilot picks the files to touch, proposes terminal commands, runs them, reads the output, and iterates. It keeps going until the task is done or it gets stuck. When GitHub announced it with Claude 3.7 Sonnet in April 2025, it posted a 56% pass rate on SWE-bench Verified.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;cloud agent&lt;/strong&gt; (launched GA in September 2025) is the async version. You assign a GitHub issue to Copilot from the web or CLI, and it runs inside a sandboxed GitHub Actions environment, pushes commits to a draft PR, runs your tests, and requests your review when done. You don't have to be at your desk.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Copilot CLI&lt;/strong&gt; reached GA in February 2026. It's a separate install (npm, Homebrew, or WinGet) that brings a Plan mode, a fully autonomous Autopilot mode, parallel specialized sub-agents (Explore, Task, Code Review, Plan), repository memory across sessions, hooks, plugins, and a built-in GitHub MCP server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copilot code review&lt;/strong&gt; reached GA in April 2025 and was rearchitected at GitHub Universe 2025 to combine LLM reasoning with deterministic engines like ESLint and CodeQL. In December 2025 it was extended so that PRs from unlicensed contributors in an org can still be reviewed, billed to the org.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Copilot Spaces&lt;/strong&gt; (GA September 2025) are curated bundles of files, issues, PRs, and docs that act as grounding context for any Copilot surface.&lt;/p&gt;

&lt;p&gt;On customization: you can set a &lt;code&gt;.github/copilot-instructions.md&lt;/code&gt; at the repo level, personal or org-level instructions, and since Universe 2025 an &lt;code&gt;AGENTS.md&lt;/code&gt; file that defines custom agents with their own tool sets and behavior per project. MCP has become the primary extension mechanism - servers get invoked automatically based on intent rather than requiring explicit calls.&lt;/p&gt;

&lt;h2&gt;Where it runs&lt;/h2&gt;

&lt;p&gt;Inline completions are supported in VS Code, Visual Studio, JetBrains IDEs, Eclipse, Xcode, Vim/Neovim, and Azure Data Studio. Chat runs in VS Code, Visual Studio, JetBrains, Eclipse, Xcode, GitHub.com, GitHub Mobile, Windows Terminal, and Raycast. Agent mode is in VS Code, Visual Studio, JetBrains, Eclipse, and Xcode. Vim/Neovim gets completions only - no chat. The CLI is cross-platform but not available on the Free tier.&lt;/p&gt;

&lt;h2&gt;Plans and pricing, without the marketing&lt;/h2&gt;

&lt;p&gt;There are five tiers, plus a free student plan:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Free&lt;/strong&gt; - launched in December 2024; 2,000 completions and 50 premium requests per month.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Student&lt;/strong&gt; - free for verified GitHub Education users; unlimited completions and 300 premium requests.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pro&lt;/strong&gt; - $10/month (or $100/year); unlimited completions, 300 premium requests, and it includes agent mode, the cloud agent, the CLI, and MCP.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pro+&lt;/strong&gt; - $39/month; 1,500 premium requests and access to every available model, including preview models as they ship.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Business&lt;/strong&gt; - $19/user/month; 300 premium requests per user, admin controls, audit logs, IP indemnity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Enterprise&lt;/strong&gt; - $39/user/month; 1,000 premium requests per user, organization-codebase indexing, a fine-tuned private completion model, and Bing-grounded web search in GitHub.com chat.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overages on every paid plan cost $0.04 per additional premium request. Base-model usage - inline completions, chat with the included models - doesn't count against the premium budget.&lt;/p&gt;
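&lt;p&gt;As a quick sanity check on those numbers, here is a back-of-envelope cost model - a sketch, not GitHub's actual billing logic:&lt;/p&gt;

```python
# Back-of-envelope monthly cost for a paid Copilot plan, using the
# overage rate quoted above ($0.04 per extra premium request).
OVERAGE_RATE = 0.04

def monthly_cost(base_price: float, included: int, used: int) -> float:
    """Base subscription plus $0.04 for each premium request over the budget."""
    extra = max(0, used - included)
    return round(base_price + extra * OVERAGE_RATE, 2)

# Pro at $10/month with 300 included premium requests, using 450:
cost = monthly_cost(10.0, 300, 450)  # 10 + 150 * 0.04 = 16.0
```

&lt;p&gt;The takeaway: overages are cheap at the margin, but a heavy agent-mode month can quietly add up.&lt;/p&gt;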

&lt;p&gt;One practical note: &lt;strong&gt;Copilot is not available on GitHub Enterprise Server&lt;/strong&gt;, only on GitHub Enterprise Cloud. That surprises a lot of enterprise architects.&lt;/p&gt;

&lt;h2&gt;What the productivity data actually says&lt;/h2&gt;

&lt;p&gt;I want to be careful here because the numbers that circulate online are often decontextualized.&lt;/p&gt;

&lt;p&gt;The most cited study is Peng et al. (2022, arXiv:2302.06590). Ninety-five developers on Upwork were randomly split and asked to implement an HTTP server in JavaScript. The Copilot group finished in 1 hour 11 minutes on average; the control group took 2 hours 41 minutes. That's a 55.8% speedup, statistically significant (P=0.0017). Less experienced developers, older developers, and those with higher baseline workloads benefited most. The task was narrow and the sample was controlled, so this number describes one context, not all development work.&lt;/p&gt;
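&lt;p&gt;The headline figure is easy to reproduce from the reported times:&lt;/p&gt;

```python
# Reproducing the speedup from the reported completion times:
# 1 h 11 min with Copilot vs 2 h 41 min without.
copilot_min = 71    # 1 h 11 min
control_min = 161   # 2 h 41 min

speedup = (control_min - copilot_min) / control_min
# With these rounded minute values this gives about 0.559 (~56%);
# the paper's 55.8% comes from the unrounded measurements.
```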

&lt;p&gt;The GitHub × Accenture randomized controlled trial is the strongest enterprise evidence. Across roughly 450 Accenture developers, Copilot produced an 8.69% increase in pull requests per developer, a 15% increase in PR merge rate, and an 84% increase in successful builds. About 30% of Copilot suggestions were accepted, and 88% of accepted characters were retained. Accenture has since rolled Copilot out to more than 12,000 developers.&lt;/p&gt;

&lt;p&gt;A ZoomInfo field study (arXiv:2501.13282) covering 400+ developers through a four-phase rollout found a 33% full-suggestion acceptance rate, a 20% line-of-code acceptance rate, and 72% developer satisfaction.&lt;/p&gt;

&lt;p&gt;The numbers I'd avoid citing without sourcing are the "46% of code written by Copilot" and "15 million users" figures - they come from press announcements rather than controlled studies. The Forrester ROI figures are real but behind a paywall; if you want to cite them, get the original study.&lt;/p&gt;

&lt;h2&gt;The direction things are going&lt;/h2&gt;

&lt;p&gt;At GitHub Universe 2025, GitHub announced &lt;strong&gt;Agent HQ&lt;/strong&gt;, a control plane that orchestrates agents from Anthropic, OpenAI, Google, Cognition, and xAI across GitHub, VS Code, CLI, and Mobile under a single Copilot subscription. The framing was explicit: Copilot is positioning itself as the interface for all coding agents, not just the home for GitHub's own.&lt;/p&gt;

&lt;p&gt;The economic model is also shifting. Every paid tier includes unlimited use of a base model with a monthly premium-request budget for frontier calls. As frontier models become cheaper, more of them will probably move into the base tier. For now, the budget disciplines how much you use the most powerful models per month.&lt;/p&gt;

&lt;p&gt;If there's one sentence that captures where Copilot is in 2026: it's not a product anymore, it's an orchestration layer. Completions, chat, edits, in-IDE agents, the CLI, and the cloud agent are points on a continuum from "suggest what comes next" to "go do this task and tell me when you're done." The underlying model changes constantly. What stays stable is the interface - and increasingly, the agents you define yourself.&lt;/p&gt;




&lt;p&gt;If you want to dive deeper into Copilot's learning resources, Microsoft Learn has a full set of modules and learning paths covering everything from setup to agent mode and responsible AI:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://learn.microsoft.com/copilot?wt.mc_id=studentamb_510659" rel="noopener noreferrer"&gt;https://learn.microsoft.com/copilot?wt.mc_id=studentamb_510659&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Carlos José Castro Galante is a Full Stack Developer and Azure AI Engineer certified by Microsoft (AI-102, AI-900, AZ-900) and ITBA. Available for freelance projects from Argentina.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>ai</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>How I Prepared For and Passed the Azure AI Engineer Associate (AI-102) Certification</title>
      <dc:creator>Carlos José Castro Galante</dc:creator>
      <pubDate>Fri, 10 Apr 2026 22:21:55 +0000</pubDate>
      <link>https://dev.to/carlosjcastrog/como-prepare-y-aprobe-la-certificacion-azure-ai-engineer-associate-ai-102-ba5</link>
      <guid>https://dev.to/carlosjcastrog/como-prepare-y-aprobe-la-certificacion-azure-ai-engineer-associate-ai-102-ba5</guid>
      <description>&lt;p&gt;&lt;strong&gt;Por Carlos José Castro Galante - Full Stack Developer &amp;amp; Azure AI Engineer | Argentina&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Earning the &lt;strong&gt;Microsoft Certified: Azure AI Engineer Associate (AI-102)&lt;/strong&gt; certification was one of the most important steps in my career as a developer. In this article I'll tell you exactly how I did it: from how I got the voucher to exam day, with complete honesty about what worked and what took more effort than expected. And at the end I'll share something that completely changes the picture for anyone thinking about certifying today.&lt;/p&gt;




&lt;h2&gt;A bit of context&lt;/h2&gt;

&lt;p&gt;Before taking the AI-102, I already had the &lt;strong&gt;AZ-900 (Azure Fundamentals)&lt;/strong&gt;, and that made a huge difference. My recommendation will always be the same: if you're going to certify in Azure, start with the AZ-900. It isn't a formal prerequisite, but it is the conceptual foundation of the whole platform. Without it, many AI-102 concepts become needlessly confusing, and you end up learning two things at once instead of one.&lt;/p&gt;

&lt;p&gt;The AZ-900 gave me the general map of Azure: what a subscription is, how the portal works, what resources and resource groups are, and how the platform is organized overall. With that foundation, when I got to the AI-102 I could focus on going deep into artificial intelligence instead of learning Azure from scratch at the same time.&lt;/p&gt;




&lt;h2&gt;How I got the voucher&lt;/h2&gt;

&lt;p&gt;I got the exam voucher through the &lt;strong&gt;Código Facilito Azure AI-102 Bootcamp&lt;/strong&gt;. For those who don't know it, Código Facilito is a Latin American tech education platform that runs bootcamps focused on preparing for Microsoft certifications. The program includes live classes, study material and, upon completion, a voucher to take the official exam.&lt;/p&gt;

&lt;p&gt;This matters because the Microsoft exam has a fee in US dollars that can be significant from Argentina. Getting the voucher through a bootcamp is a very concrete way to lower that barrier to entry, and it's something I'm genuinely grateful to have found.&lt;/p&gt;




&lt;h2&gt;How long I studied and how I organized it&lt;/h2&gt;

&lt;p&gt;I spent &lt;strong&gt;two months&lt;/strong&gt; preparing before taking the exam. I was fairly systematic with the material, though I didn't follow a rigid schedule; I moved at my own pace, making sure I understood each topic well before moving on to the next.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Código Facilito&lt;/strong&gt; material was the backbone of my study. The bootcamp classes cover the exam syllabus in a structured way, oriented toward what Microsoft actually tests, which saves a lot of the time otherwise lost not knowing where to start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Learn&lt;/strong&gt; is indispensable and completely free. It has learning paths specific to the AI-102 with interactive modules, sandbox exercises, and practice questions. If I had to pick a single resource, it would be this one.&lt;/p&gt;

&lt;p&gt;I also used &lt;strong&gt;YouTube videos&lt;/strong&gt; to reinforce specific concepts, especially practical demos of services like Azure OpenAI, Azure Cognitive Search, and Language Studio. Seeing the services in action often explains in five minutes what a document takes pages to describe.&lt;/p&gt;

&lt;p&gt;And here I want to make an important point: &lt;strong&gt;practicing is worth more than just reading&lt;/strong&gt;. The AI-102 is not an exam you pass by memorizing definitions. The questions are real-world scenarios, and Microsoft's question bank is enormous and highly randomized, which means you never know exactly which combination of questions you'll get. The only way to be ready for that is to have practiced enough to truly understand the concepts, not just to have read about them. Going into the Azure portal, creating resources, trying the services, breaking things and understanding why they failed: that is what really prepares you.&lt;/p&gt;




&lt;h2&gt;The AI-102 syllabus: what the exam covers&lt;/h2&gt;

&lt;p&gt;The exam covers a wide range of Azure AI services. The main blocks are the following.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Computer vision solutions&lt;/strong&gt; with Azure Computer Vision, Azure Custom Vision, and the Face API. Concepts such as image analysis, object detection, OCR, and facial recognition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Natural language processing (NLP)&lt;/strong&gt; with Azure AI Language: sentiment analysis, entity extraction, text classification, and translation with Azure Translator.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge mining&lt;/strong&gt; with Azure Cognitive Search: document indexing, AI enrichment, and skillsets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conversational AI&lt;/strong&gt; with Azure Bot Service and QnA Maker (now integrated into Language Studio), along with conversational flow design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure OpenAI&lt;/strong&gt;, which covers large language models, integrating GPT into applications, prompts, and embeddings. This block was added to the exam more recently and carries increasing weight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Responsible AI&lt;/strong&gt;, because Microsoft doesn't only test the technical side. There are conceptual questions on ethical principles, fairness, transparency, and privacy in AI systems. It's a block you shouldn't underestimate.&lt;/p&gt;




&lt;h2&gt;The topics I struggled with most&lt;/h2&gt;

&lt;p&gt;To be completely honest, two blocks took me longer than the rest.&lt;/p&gt;

&lt;p&gt;The first was &lt;strong&gt;Azure Cognitive Search&lt;/strong&gt;. The indexing architecture, the enrichment skillsets, and how all the components connect to each other (data source, indexer, index, skillset) are not intuitive the first time you see them. I had to reread the material several times and do hands-on exercises in the portal until it started to make sense as a system.&lt;/p&gt;
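&lt;p&gt;The wiring is easier to see as records that reference each other by name. This is a conceptual sketch, not the real Azure SDK or REST payloads:&lt;/p&gt;

```python
# Conceptual sketch (not the real Azure SDK) of how the four
# Cognitive Search pieces point at each other by name.
datasource = {"name": "reviews-blob"}        # where the raw documents live
skillset   = {"name": "enrich-skills"}       # OCR, entity extraction, etc.
index      = {"name": "reviews-index"}       # the searchable target
indexer    = {
    "name": "reviews-indexer",
    "dataSourceName": datasource["name"],    # what to read
    "skillsetName": skillset["name"],        # how to enrich
    "targetIndexName": index["name"],        # where to write
}
```

&lt;p&gt;Once I saw the indexer as the piece that ties the other three together, the whole architecture stopped feeling arbitrary.&lt;/p&gt;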

&lt;p&gt;The second was &lt;strong&gt;Azure OpenAI&lt;/strong&gt;, not because of technical difficulty but because the service evolved very quickly and part of the material available online was outdated by the time I was studying. Here, relying on the official Microsoft Learn documentation was essential, because it's what Microsoft updates first.&lt;/p&gt;




&lt;h2&gt;Taking the exam in English or Spanish&lt;/h2&gt;

&lt;p&gt;I chose to take the exam &lt;strong&gt;in English&lt;/strong&gt; and I recommend it. The Spanish translations of Microsoft exams aren't always precise; some technical terms lose nuance or are translated in ways that create unnecessary confusion. If your technical English is good enough to read documentation, take it in English. It's worth it.&lt;/p&gt;




&lt;h2&gt;I passed on the first attempt&lt;/h2&gt;

&lt;p&gt;Yes, I passed on the first attempt. I'm not saying this to brag but to make clear that with two months of consistent preparation and the right resources it is entirely achievable, even from Argentina and without access to enterprise Azure labs. The key was combining theory with real practice from the start, not waiting to "finish studying" before touching the portal.&lt;/p&gt;




&lt;h2&gt;Resources I recommend&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Código Facilito&lt;/strong&gt; for the bootcamp and the voucher: &lt;a href="https://codigofacilito.com" rel="noopener noreferrer"&gt;codigofacilito.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Microsoft Learn, AI-102 path&lt;/strong&gt;: &lt;a href="https://learn.microsoft.com/es-es/credentials/certifications/azure-ai-engineer" rel="noopener noreferrer"&gt;learn.microsoft.com/es-es/credentials/certifications/azure-ai-engineer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Azure portal&lt;/strong&gt; with a free account for hands-on practice. Many exam questions are practical scenarios, and nothing replaces having used the services first-hand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YouTube&lt;/strong&gt; for demos of Language Studio, Vision Studio, and Azure OpenAI Studio. Seeing the services in action complements reading the documentation very well.&lt;/p&gt;




&lt;h2&gt;What's next: the AI-102 retires on June 30, 2026&lt;/h2&gt;

&lt;p&gt;And here comes the part that matters most if you're reading this in 2026: &lt;strong&gt;the AI-102 retires permanently on June 30, 2026&lt;/strong&gt;. After that date you won't be able to take it, retake it, or renew it. It's a hard deadline, with no exceptions.&lt;/p&gt;

&lt;p&gt;Microsoft confirmed it officially on its Microsoft Learn page: &lt;em&gt;"This certification, related exam, and renewal assessments will retire on June 30, 2026. You will no longer be able to earn or renew this certification after this date."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you already hold the certification, don't worry: it isn't revoked or invalidated. It stays on your transcript until its natural expiration date. But if you need to renew it, you have to do so before June 30; after that date the option disappears.&lt;/p&gt;

&lt;p&gt;And if you're only now thinking about taking it, honestly: the window is very short, and Microsoft itself recommends preparing for the new exam instead of the AI-102. The official AI-102 training materials were already retired in April 2026, which makes studying from scratch considerably harder.&lt;/p&gt;




&lt;h2&gt;The replacement: AI-103, Azure AI App and Agent Developer&lt;/h2&gt;

&lt;p&gt;The successor to the AI-102 is called &lt;strong&gt;AI-103: Azure AI App and Agent Developer Associate&lt;/strong&gt;, and its beta version has been available since April 2026. General availability is expected for June 2026, almost simultaneously with the retirement of the AI-102.&lt;/p&gt;

&lt;p&gt;This change is not just a new number. It's a fundamental shift in what Microsoft expects of an AI Engineer today. Knowing the Azure services or having a data background is no longer enough: the new standard requires knowing &lt;strong&gt;how to orchestrate artificial intelligence in real scenarios and business contexts&lt;/strong&gt;. Autonomous agents, multi-step reasoning flows, integrating generative models into production applications, multi-agent orchestration: that is what the AI-103 tests.&lt;/p&gt;

&lt;p&gt;The new exam's core platform is &lt;strong&gt;Microsoft Foundry&lt;/strong&gt;, and the focus is building production-ready AI applications, not just knowing the services. It's a jump from knowing how to use AI to knowing how to build with it at scale.&lt;/p&gt;

&lt;p&gt;This change doesn't surprise me. The industry has been moving exactly in that direction, and it makes sense for Microsoft to update its certifications to reflect it. Anyone who wants to stay relevant in the Azure AI ecosystem will need to adapt to this new standard sooner rather than later.&lt;/p&gt;




&lt;h2&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;The AI-102 was a certification worth every hour of study. It forced me to get to know the Azure AI ecosystem in depth, and that translates into real projects. But the AI world moves fast, and Microsoft is moving with it.&lt;/p&gt;

&lt;p&gt;If you're starting to prepare today, my advice is clear: go straight for the AI-103. Make sure you have the AZ-900 as a foundation, get familiar with Microsoft Foundry and Azure AI Foundry, and focus your study on agentic and generative scenarios. That's where everything is heading.&lt;/p&gt;

&lt;p&gt;If you have any questions, you can write to me at &lt;a href="https://carlosjcastrog.com" rel="noopener noreferrer"&gt;carlosjcastrog.com&lt;/a&gt; or find me on &lt;a href="https://www.linkedin.com/in/carlosjcastrog" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Carlos José Castro Galante is a Full Stack Developer and Azure AI Engineer certified by Microsoft (AI-102, AI-900, AZ-900) and ITBA. Available for freelance projects from Argentina.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>azure</category>
      <category>microsoft</category>
      <category>programming</category>
    </item>
    <item>
      <title>Building a Real Burger E-commerce Taught Me More Than Any Tutorial Ever Did</title>
      <dc:creator>Carlos José Castro Galante</dc:creator>
      <pubDate>Fri, 03 Apr 2026 18:52:00 +0000</pubDate>
      <link>https://dev.to/carlosjcastrog/building-a-real-burger-e-commerce-taught-me-more-than-any-tutorial-ever-did-3ggo</link>
      <guid>https://dev.to/carlosjcastrog/building-a-real-burger-e-commerce-taught-me-more-than-any-tutorial-ever-did-3ggo</guid>
      <description>&lt;p&gt;When I started this project, I thought I knew exactly what I was doing.&lt;/p&gt;

&lt;p&gt;A small burger place needed a digital menu, a way to take orders, and eventually accept payments. Nothing too ambitious. I had built interfaces before, worked with APIs, handled state.&lt;/p&gt;

&lt;p&gt;It felt like something I could finish quickly.&lt;/p&gt;

&lt;p&gt;That assumption didn’t last long.&lt;/p&gt;

&lt;p&gt;What looked like a simple menu turned into a constant series of small decisions that actually mattered. Not the kind you solve with a library or a tutorial, but the kind that come from dealing with real users and real constraints.&lt;/p&gt;




&lt;p&gt;The first thing that changed my perspective was the data.&lt;/p&gt;

&lt;p&gt;At the beginning, I treated products like static items. Name, price, image. Render a card and move on.&lt;/p&gt;

&lt;p&gt;But that model breaks almost immediately in a real scenario.&lt;/p&gt;

&lt;p&gt;Some burgers had multiple sizes. Others didn’t. Some had temporary discounts. Drinks were fixed. Combos mixed different rules. And then there was stock, which isn’t something you can fake if someone is actually trying to buy.&lt;/p&gt;

&lt;p&gt;The UI started getting messy, not because of styling, but because the data didn’t reflect reality.&lt;/p&gt;

&lt;p&gt;I had to step back and stop thinking in terms of components. The real problem wasn’t how things looked, but how things were defined.&lt;/p&gt;

&lt;p&gt;Once I restructured the data to describe behavior instead of just content, everything became easier to reason about. The UI stopped fighting back.&lt;/p&gt;
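&lt;p&gt;To make the idea concrete, here is a minimal sketch of that shift, in Python for brevity; the names, fields, and pricing rules are illustrative, not the project's actual models:&lt;/p&gt;

```python
# Hypothetical product model that describes behaviour (variants,
# discounts, stock) instead of a flat name/price/image record.
from dataclasses import dataclass

@dataclass
class Variant:
    label: str      # e.g. "single", "double"
    price: float

@dataclass
class Product:
    name: str
    variants: list          # empty list means one fixed price
    base_price: float = 0.0
    discount: float = 0.0   # fraction, e.g. 0.10 for 10% off
    stock: int = 0

    def price_for(self, label=None):
        price = self.base_price
        for v in self.variants:
            if v.label == label:
                price = v.price
        return round(price * (1 - self.discount), 2)

burger = Product("Classic", [Variant("single", 8.0), Variant("double", 11.0)],
                 discount=0.10, stock=5)
```

&lt;p&gt;With a model like this, the UI only asks "what is the price for this choice?" instead of re-deriving the rules in every component.&lt;/p&gt;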




&lt;p&gt;Then came the cart.&lt;/p&gt;

&lt;p&gt;This is where things quietly fall apart if you’re not careful.&lt;/p&gt;

&lt;p&gt;Adding a product is easy. But a product with variations, custom notes, and dynamic pricing is not just “an item” anymore.&lt;/p&gt;

&lt;p&gt;At one point, too much of that logic lived inside components. It worked, but it was fragile. Every small change had side effects somewhere else.&lt;/p&gt;

&lt;p&gt;I moved that logic into a centralized context, not because it was trendy, but because I needed control.&lt;/p&gt;

&lt;p&gt;After that, the components became predictable again. They stopped being responsible for decisions and went back to doing what they should do: represent state.&lt;/p&gt;

&lt;p&gt;That shift made the whole app feel stable.&lt;/p&gt;
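&lt;p&gt;The actual implementation was a React context, but the principle is language-neutral: one place owns the cart rules, and components only render the resulting state. A minimal sketch with illustrative names:&lt;/p&gt;

```python
# Sketch of centralised cart logic: pure functions own the rules,
# so the rest of the app just renders what they return.
def add_item(cart, product, unit_price, qty=1, note=""):
    """Return a new cart; never mutate the old one in place."""
    line = {"product": product, "unit": unit_price, "qty": qty, "note": note}
    return cart + [line]

def total(cart):
    return round(sum(line["unit"] * line["qty"] for line in cart), 2)

cart = add_item([], "Classic double", 9.9, qty=2)
cart = add_item(cart, "Coke", 2.5)
```

&lt;p&gt;Keeping these functions pure is what makes every state transition predictable and easy to test.&lt;/p&gt;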




&lt;p&gt;The WhatsApp integration was another moment where expectations didn’t match reality.&lt;/p&gt;

&lt;p&gt;Technically, sending a message is trivial. It’s just a link.&lt;/p&gt;

&lt;p&gt;But what arrives on the other end matters more than how you send it.&lt;/p&gt;

&lt;p&gt;If the message is messy, incomplete, or hard to read, the business suffers. Orders get misunderstood. Time is lost. Mistakes happen.&lt;/p&gt;

&lt;p&gt;So instead of thinking about the integration itself, I focused on the output.&lt;/p&gt;

&lt;p&gt;I built a formatter that turns the cart into something structured and readable. Clear product names, quantities, totals, user info, delivery details.&lt;/p&gt;

&lt;p&gt;Now the message actually works for the person receiving it.&lt;/p&gt;
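&lt;p&gt;A minimal sketch of that kind of formatter, in Python for brevity; the field names and layout are illustrative, not the project's code:&lt;/p&gt;

```python
# Turn a cart into a readable order message and a prefilled
# WhatsApp link (wa.me links just URL-encode the message text).
from urllib.parse import quote

def format_order(cart, customer, address):
    lines = ["New order:"]
    for item in cart:
        lines.append(f"- {item['qty']}x {item['name']} (${item['unit'] * item['qty']:.2f})")
    total = sum(i["unit"] * i["qty"] for i in cart)
    lines.append(f"Total: ${total:.2f}")
    lines.append(f"Customer: {customer}")
    lines.append(f"Delivery: {address}")
    return "\n".join(lines)

def whatsapp_link(phone, message):
    return f"https://wa.me/{phone}?text={quote(message)}"

msg = format_order([{"name": "Classic double", "qty": 2, "unit": 9.9}],
                   "Ana", "Main St 123")
```

&lt;p&gt;The structure is the point: the person taking the order can scan quantities, totals, and delivery details at a glance.&lt;/p&gt;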

&lt;p&gt;That changed more than I expected.&lt;/p&gt;




&lt;p&gt;At some point I started looking into payments, specifically Mercado Pago.&lt;/p&gt;

&lt;p&gt;That’s when I realized something I had been ignoring.&lt;/p&gt;

&lt;p&gt;Payments don’t fix a bad flow.&lt;/p&gt;

&lt;p&gt;If the process is confusing, adding a payment button just makes things worse. People don’t complete what they don’t understand.&lt;/p&gt;

&lt;p&gt;So I paused that part and focused on making the ordering experience feel natural first. Clear steps, no surprises, no friction that didn’t need to exist.&lt;/p&gt;

&lt;p&gt;Only after that does it make sense to introduce payments.&lt;/p&gt;




&lt;p&gt;What this project really did was change how I approach building things.&lt;/p&gt;

&lt;p&gt;I stopped thinking in terms of “features” and started thinking in terms of behavior.&lt;/p&gt;

&lt;p&gt;I stopped assuming that a clean UI means a simple system.&lt;/p&gt;

&lt;p&gt;And more importantly, I stopped relying on ideal scenarios.&lt;/p&gt;

&lt;p&gt;Real users don’t follow perfect paths. They make unexpected choices, skip steps, change their minds. If your system can’t handle that, it doesn’t matter how good your code looks.&lt;/p&gt;




&lt;p&gt;I didn’t build something revolutionary.&lt;/p&gt;

&lt;p&gt;But I built something that works in a real environment, with real constraints, and that forced me to make better decisions than any tutorial ever did.&lt;/p&gt;

&lt;p&gt;And that, at least for me, was the part that actually mattered.&lt;/p&gt;




&lt;h2&gt;About me&lt;/h2&gt;

&lt;p&gt;I’m Carlos José Castro Galante, a software developer focused on building real-world applications that combine frontend, automation, and practical AI.&lt;/p&gt;

</description>
      <category>react</category>
      <category>typescript</category>
      <category>webdev</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Python Data Analysis Project: Building a Learning Radar for Educational Insights</title>
      <dc:creator>Carlos José Castro Galante</dc:creator>
      <pubDate>Tue, 31 Mar 2026 20:50:17 +0000</pubDate>
      <link>https://dev.to/carlosjcastrog/python-data-analysis-project-building-a-learning-radar-for-educational-insights-3c8j</link>
      <guid>https://dev.to/carlosjcastrog/python-data-analysis-project-building-a-learning-radar-for-educational-insights-3c8j</guid>
      <description>&lt;p&gt;If you are learning Python and data science, one of the best ways to grow is by building real world projects. In this article I will show how I built a complete data analysis project using Python to extract insights from online education data.&lt;/p&gt;

&lt;p&gt;This is not a typical beginner project. The goal was to create something useful, scalable, and portfolio ready.&lt;/p&gt;

&lt;p&gt;The result is Learning Radar, a data driven system designed to analyze course reviews and help understand what really makes an online course valuable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why build a data analysis project like this&lt;/strong&gt;&lt;br&gt;
Most Python data science tutorials focus on small datasets and simple examples. In real scenarios, data is messy, large, and comes from different sources.&lt;/p&gt;

&lt;p&gt;This project focuses on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working with large datasets&lt;/li&gt;
&lt;li&gt;Cleaning and transforming real data&lt;/li&gt;
&lt;li&gt;Performing exploratory data analysis&lt;/li&gt;
&lt;li&gt;Creating meaningful data visualizations&lt;/li&gt;
&lt;li&gt;Generating insights that solve real problems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is designed to reflect real data science workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project goal&lt;/strong&gt;&lt;br&gt;
The main objective was to analyze thousands of course reviews and answer key questions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What factors influence course ratings?&lt;/li&gt;
&lt;li&gt;How does difficulty impact student satisfaction?&lt;/li&gt;
&lt;li&gt;Which categories perform better?&lt;/li&gt;
&lt;li&gt;What patterns exist in user feedback?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of just analyzing data, I focused on building an educational intelligence tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dataset and data sources&lt;/strong&gt;&lt;br&gt;
To meet the requirement of working with more than 50,000 rows, I combined multiple public datasets related to online courses.&lt;/p&gt;

&lt;p&gt;The final dataset includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Course title&lt;/li&gt;
&lt;li&gt;Category&lt;/li&gt;
&lt;li&gt;Rating&lt;/li&gt;
&lt;li&gt;Review text&lt;/li&gt;
&lt;li&gt;Difficulty level&lt;/li&gt;
&lt;li&gt;Engagement indicators&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Combining datasets allowed me to create a richer and more useful analysis.&lt;/p&gt;
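&lt;p&gt;The combining step can be sketched like this. This is a minimal illustration, not the project's actual schema: the platform names, column names, and rows are all hypothetical.&lt;/p&gt;

```python
import pandas as pd

# Hypothetical example: two review sources with different column
# names, aligned to one schema before stacking them together.
platform_a = pd.DataFrame({
    "course_title": ["Python Basics", "Intro to SQL"],
    "rating": [4.5, 4.0],
    "review_text": ["Great pacing", "Too short"],
})
platform_b = pd.DataFrame({
    "title": ["Data Viz 101"],
    "stars": [3.5],
    "review": ["Good charts, weak exercises"],
})

# Rename columns so both sources share the same schema.
platform_b = platform_b.rename(columns={
    "title": "course_title",
    "stars": "rating",
    "review": "review_text",
})

# Stack the sources into a single dataset.
combined = pd.concat([platform_a, platform_b], ignore_index=True)
print(len(combined))  # 3
```

Renaming before `pd.concat` keeps the merge explicit; misaligned column names would otherwise silently produce extra half-empty columns.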

&lt;p&gt;&lt;strong&gt;Data cleaning and preprocessing in Python&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Data cleaning is one of the most important steps in any data science project.&lt;/p&gt;

&lt;p&gt;I used Python and Pandas to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove missing values&lt;/li&gt;
&lt;li&gt;Normalize column names&lt;/li&gt;
&lt;li&gt;Convert data types&lt;/li&gt;
&lt;li&gt;Clean text data from reviews&lt;/li&gt;
&lt;li&gt;Remove duplicates&lt;/li&gt;
&lt;/ul&gt;
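&lt;p&gt;The cleaning steps above can be sketched with Pandas. The raw frame and its column names are stand-ins for illustration only.&lt;/p&gt;

```python
import pandas as pd

# Hypothetical messy input: inconsistent column names, string-typed
# ratings, padded text, a missing value, and a duplicate row.
raw = pd.DataFrame({
    " Course Title ": ["Python Basics", "Python Basics", None],
    "Rating": ["4.5", "4.5", "3.0"],
    "Review Text": ["  Great course!  ", "  Great course!  ", "Okay"],
})

# Normalize column names.
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]

# Convert data types.
raw["rating"] = pd.to_numeric(raw["rating"], errors="coerce")

# Clean text data from reviews.
raw["review_text"] = raw["review_text"].str.strip()

# Remove missing values and duplicates.
clean = raw.dropna().drop_duplicates().reset_index(drop=True)
print(clean.shape)  # (1, 3)
```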

&lt;p&gt;I also created new features to improve analysis:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review length&lt;/li&gt;
&lt;li&gt;Rating groups&lt;/li&gt;
&lt;li&gt;Difficulty mapping&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step ensures accuracy and consistency in the results.&lt;/p&gt;
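&lt;p&gt;The three derived features might look like this. The bin edges, labels, and difficulty mapping are assumptions for the sketch, not the project's exact choices.&lt;/p&gt;

```python
import pandas as pd

# Illustrative reviews table; column names are assumptions.
df = pd.DataFrame({
    "review_text": ["Loved it", "Way too fast and hard to follow"],
    "rating": [4.8, 2.1],
    "difficulty": ["Beginner", "Advanced"],
})

# Review length as a numeric feature.
df["review_length"] = df["review_text"].str.len()

# Rating groups via binning.
df["rating_group"] = pd.cut(
    df["rating"],
    bins=[0, 2.5, 4.0, 5.0],
    labels=["low", "medium", "high"],
)

# Map difficulty labels onto an ordinal scale.
df["difficulty_level"] = df["difficulty"].map(
    {"Beginner": 1, "Intermediate": 2, "Advanced": 3}
)
```

Mapping difficulty to an ordinal scale is what later makes a difficulty-versus-rating correlation possible at all.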

&lt;p&gt;&lt;strong&gt;Exploratory Data Analysis with Pandas&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Exploratory Data Analysis is where the real insights begin.&lt;/p&gt;

&lt;p&gt;Using Pandas, I explored:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Distribution of ratings&lt;/li&gt;
&lt;li&gt;Average rating by category&lt;/li&gt;
&lt;li&gt;Relationship between difficulty and rating&lt;/li&gt;
&lt;li&gt;Patterns in review behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This step helps understand the structure of the data and identify trends.&lt;/p&gt;
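&lt;p&gt;A minimal version of that exploration, on a tiny stand-in frame (the real analysis ran on tens of thousands of rows):&lt;/p&gt;

```python
import pandas as pd

# Small illustrative dataset; categories and values are made up.
df = pd.DataFrame({
    "category": ["Programming", "Programming", "Design", "Design"],
    "difficulty_level": [1, 3, 1, 2],
    "rating": [4.6, 3.8, 4.2, 4.4],
})

# Distribution of ratings.
print(df["rating"].describe())

# Average rating by category, best first.
avg_by_category = (
    df.groupby("category")["rating"].mean().sort_values(ascending=False)
)
print(avg_by_category)

# Relationship between difficulty and rating (Pearson correlation).
print(df["difficulty_level"].corr(df["rating"]))
```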

&lt;p&gt;&lt;strong&gt;Key insights from the analysis&lt;/strong&gt;&lt;br&gt;
Some interesting findings from this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Courses with medium difficulty often receive better ratings&lt;/li&gt;
&lt;li&gt;Very long reviews usually reflect strong opinions&lt;/li&gt;
&lt;li&gt;Some categories consistently perform better&lt;/li&gt;
&lt;li&gt;High engagement does not always correlate with high ratings&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These insights can help students choose better courses and help educators improve content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project structure and best practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To make the project scalable and professional, I organized it as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;notebooks for analysis&lt;/li&gt;
&lt;li&gt;data folder for datasets&lt;/li&gt;
&lt;li&gt;src for reusable code&lt;/li&gt;
&lt;li&gt;assets for visualizations&lt;/li&gt;
&lt;li&gt;README for documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure follows good software engineering practices and improves maintainability.&lt;/p&gt;
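&lt;p&gt;A layout along these lines (the root folder name is illustrative):&lt;/p&gt;

```
learning-radar/
├── data/          # raw and processed datasets
├── notebooks/     # exploratory analysis
├── src/           # reusable cleaning and plotting code
├── assets/        # exported visualizations
└── README.md      # project documentation
```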

&lt;p&gt;&lt;strong&gt;Technologies used&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Pandas&lt;/li&gt;
&lt;li&gt;NumPy&lt;/li&gt;
&lt;li&gt;Matplotlib&lt;/li&gt;
&lt;li&gt;Seaborn&lt;/li&gt;
&lt;li&gt;Jupyter Notebook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools are widely used in data science and provide a strong foundation for analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges faced&lt;/strong&gt;&lt;br&gt;
Working with large datasets introduced several challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memory optimization&lt;/li&gt;
&lt;li&gt;Data consistency across sources&lt;/li&gt;
&lt;li&gt;Cleaning unstructured text data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These were solved by optimizing data types, validating merges, and applying systematic preprocessing.&lt;/p&gt;
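&lt;p&gt;The data-type optimization can be sketched like this; the frame is a small stand-in for the merged dataset, where low-cardinality string columns benefit most.&lt;/p&gt;

```python
import pandas as pd

# Stand-in for a large merged dataset: a repetitive string column
# and a float column stored at full precision.
df = pd.DataFrame({
    "category": ["Programming"] * 1000 + ["Design"] * 1000,
    "rating": [4.5] * 2000,
})

before = df.memory_usage(deep=True).sum()

# Low-cardinality strings compress well as categoricals, and
# float32 is plenty of precision for 0-5 ratings.
df["category"] = df["category"].astype("category")
df["rating"] = df["rating"].astype("float32")

after = df.memory_usage(deep=True).sum()
print(before, after)  # memory usage drops substantially
```

For validating merges, `pd.merge(..., indicator=True)` is one way to check how many rows matched across sources before trusting the combined frame.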

&lt;p&gt;&lt;strong&gt;Future improvements&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project can be extended in many ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sentiment analysis using Natural Language Processing&lt;/li&gt;
&lt;li&gt;Machine learning models to predict course success&lt;/li&gt;
&lt;li&gt;Interactive dashboards using Streamlit&lt;/li&gt;
&lt;li&gt;Automated data pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The long-term goal is to transform this into a full educational analytics platform.&lt;/p&gt;

</description>
      <category>python</category>
      <category>datascience</category>
      <category>machinelearning</category>
      <category>coding</category>
    </item>
  </channel>
</rss>
