<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Madhu Kumar</title>
    <description>The latest articles on DEV Community by Madhu Kumar (@cloudgeek7).</description>
    <link>https://dev.to/cloudgeek7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F780928%2F7994a72f-5bb3-40c0-8d54-e4cf36ef79b6.png</url>
      <title>DEV Community: Madhu Kumar</title>
      <link>https://dev.to/cloudgeek7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cloudgeek7"/>
    <language>en</language>
    <item>
      <title>Introducing Amazon Nova Act: The Future of AI-Powered Web Automation</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Fri, 04 Apr 2025 16:41:18 +0000</pubDate>
      <link>https://dev.to/aws-heroes/introducing-amazon-nova-act-the-future-of-ai-powered-web-automation-11np</link>
      <guid>https://dev.to/aws-heroes/introducing-amazon-nova-act-the-future-of-ai-powered-web-automation-11np</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foben8o2sf28qfvc5pjj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foben8o2sf28qfvc5pjj5.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A New Era of AI Agents
&lt;/h2&gt;

&lt;p&gt;On March 31, 2025, AWS introduced Amazon Nova Act, a new AI model designed to perform actions within a web browser. Alongside it, AWS launched the Nova Act SDK, available as a research preview at &lt;a href="https://nova.amazon.com" rel="noopener noreferrer"&gt;nova.amazon.com&lt;/a&gt;. The SDK lets developers experiment with an early version of &lt;a href="https://github.com/aws/nova-act" rel="noopener noreferrer"&gt;Nova Act&lt;/a&gt; and build AI agents capable of completing tasks in a web browser, such as submitting an out-of-office request, setting calendar holds, and even composing ‘away from office’ emails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53g41op85crebsx03ubf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53g41op85crebsx03ubf.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Traditional AI Agents
&lt;/h2&gt;

&lt;p&gt;The concept of AI agents has evolved significantly. Initially, agents were designed primarily for natural language interactions and retrieval-augmented generation (RAG). However, AWS envisions agents as &lt;strong&gt;systems that actively perform complex digital and physical tasks on behalf of users.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While existing agents rely heavily on API integrations, Nova Act extends capabilities beyond API-covered scenarios. The dream is to create agents that can handle multi-step, complex workflows, such as organising events, managing IT tasks, and boosting productivity, without constant human supervision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Addressing Today’s Challenges in AI Agents
&lt;/h2&gt;

&lt;p&gt;One of the major limitations of current AI agents is their inability to reliably complete multi-step tasks without human intervention. Nova Act aims to solve this problem by offering a highly structured approach to workflow automation. The Nova Act SDK provides developers with reliable atomic commands, such as search, checkout, and answering on-screen queries, while allowing for detailed, customisable instructions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of Nova Act SDK:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Reliable Atomic Commands&lt;/strong&gt; — Execute actions like selecting dates, interacting with drop-down menus, and handling pop-ups with high accuracy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detailed Command Customisation&lt;/strong&gt; — Developers can add specific instructions (e.g., “do not accept insurance upsell”) to ensure precise execution.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API and Playwright Integration&lt;/strong&gt; — Offers a combination of API calls and direct browser manipulation via Playwright for enhanced reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python Code Integration&lt;/strong&gt; — Allows interleaving Python scripts for testing, breakpoints, assertions, and parallelisation to optimise performance (see the sketch after this list).&lt;/li&gt;
&lt;/ol&gt;
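
&lt;p&gt;As a rough illustration of how these features fit together, here is a minimal sketch of a script built on the Nova Act SDK. It is based on the research-preview examples, so the exact parameters and setup (including how the API key is supplied) may differ in your environment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch (research preview; details may change).
# pip install nova-act, and obtain an API key via nova.amazon.com.
from nova_act import NovaAct

with NovaAct(starting_page="https://www.amazon.com") as nova:
    # Atomic, natural-language commands; constraints can be added inline,
    # e.g. "do not accept the insurance upsell".
    nova.act("search for a coffee maker and open the first result")
    nova.act("add the item to the cart")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;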

&lt;p&gt;&lt;strong&gt;Best-in-Class Performance&lt;/strong&gt;&lt;br&gt;
The Nova Act model has been evaluated rigorously against industry benchmarks, achieving best-in-class performance compared to other leading models like &lt;strong&gt;Claude 3.7 Sonnet and OpenAI CUA&lt;/strong&gt;. Below are the benchmark results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9viqnu88dd51pe3tlz8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9viqnu88dd51pe3tlz8o.png" alt="Image description" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4s7flewifz0c3xsxfy33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4s7flewifz0c3xsxfy33.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These results highlight &lt;strong&gt;Nova Act’s superior ability to interact with web elements accurately,&lt;/strong&gt; significantly outperforming competitors in key areas.&lt;/p&gt;

&lt;h2&gt;
  
  
  Headless Execution and Automation
&lt;/h2&gt;

&lt;p&gt;Once configured, Nova Act agents can &lt;strong&gt;run autonomously without human supervision.&lt;/strong&gt; Developers can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable &lt;strong&gt;headless mode&lt;/strong&gt; for seamless execution (see the sketch after this list).&lt;/li&gt;
&lt;li&gt;Convert their agent into an &lt;strong&gt;API&lt;/strong&gt; for seamless product integration.&lt;/li&gt;
&lt;li&gt;Schedule automated tasks, such as &lt;strong&gt;ordering dinner every Tuesday night.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
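
&lt;p&gt;A minimal sketch of what that could look like in practice is below. The headless flag and the exact constructor parameters are assumptions based on the research-preview SDK, the delivery site is hypothetical, and the scheduling or API wrapping is left to whatever tooling you already use (cron, EventBridge, a small web framework, and so on).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch only: run a saved routine without a visible browser window.
# The headless flag is an assumption based on the preview SDK;
# the delivery site below is a hypothetical placeholder.
from nova_act import NovaAct

def order_tuesday_dinner():
    with NovaAct(starting_page="https://www.example-food-delivery.com", headless=True) as nova:
        nova.act("reorder my usual Tuesday dinner and complete checkout")

# Call this from a scheduler, or expose it behind an API endpoint of your choice.
if __name__ == "__main__":
    order_tuesday_dinner()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;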

&lt;h2&gt;
  
  
  Expanding AI Capabilities Beyond APIs
&lt;/h2&gt;

&lt;p&gt;An exciting aspect of Nova Act is its ability to generalise across &lt;strong&gt;novel digital environments&lt;/strong&gt; — even those it was not explicitly trained for. Initial results indicate promising performance in scenarios like &lt;strong&gt;navigating web-based games&lt;/strong&gt;, suggesting broad adaptability beyond traditional browser interactions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Application: Alexa+
&lt;/h2&gt;

&lt;p&gt;Nova Act has already been integrated into Alexa+, where it autonomously navigates the web to complete user tasks when direct API connections are unavailable. This represents a major step toward self-directed AI agents that can function independently within complex digital ecosystems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of AI Agents
&lt;/h2&gt;

&lt;p&gt;Nova Act marks the &lt;strong&gt;first milestone&lt;/strong&gt; in AWS’s broader mission to develop &lt;strong&gt;scalable, intelligent, and reliable AI agents.&lt;/strong&gt; The long-term vision extends beyond simple LLM fine-tuning: the goal is to enhance AI through &lt;strong&gt;reinforcement learning&lt;/strong&gt; across diverse environments, creating agents that can reliably execute complex, multi-step workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Join the Innovation&lt;/strong&gt;&lt;br&gt;
AWS believes that &lt;strong&gt;the most valuable AI use cases have yet to be discovered.&lt;/strong&gt; With the Nova Act SDK, AWS invites developers and innovators to explore new possibilities through &lt;strong&gt;rapid prototyping and iterative feedback.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Amazon Nova Act SDK&lt;/strong&gt; is available for &lt;strong&gt;research preview&lt;/strong&gt; at &lt;a href="https://nova.amazon.com" rel="noopener noreferrer"&gt;nova.amazon.com&lt;/a&gt;. This is just the beginning of an exciting journey in AI-driven web automation. &lt;strong&gt;Stay tuned for more updates as AWS continues to push the boundaries of intelligent automation!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Source: &lt;a href="https://labs.amazon.science/blog/nova-act" rel="noopener noreferrer"&gt;https://labs.amazon.science/blog/nova-act&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>aws</category>
    </item>
    <item>
      <title>Think Fast, Code Faster: Local AI Development with Docker Model Runner</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Sun, 30 Mar 2025 09:38:35 +0000</pubDate>
      <link>https://dev.to/docker/think-fast-code-faster-local-ai-development-with-docker-model-runner-2878</link>
      <guid>https://dev.to/docker/think-fast-code-faster-local-ai-development-with-docker-model-runner-2878</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1oxi36r1th7r1aepd7zy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1oxi36r1th7r1aepd7zy.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker has taken a bold step in the AI space with its new experimental feature, &lt;strong&gt;Docker Model Runner&lt;/strong&gt;, introduced in Docker Desktop for Mac 4.40+. This feature brings Large Language Model (LLM) inference directly to your machine, integrating seamlessly into Docker’s familiar ecosystem. With native GPU acceleration and the flexibility of running AI models locally, Docker Model Runner is a game-changer for developers, data scientists, and AI enthusiasts.&lt;/p&gt;

&lt;p&gt;Let’s dive into what Docker Model Runner is, how it works, and how you can get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Docker Model Runner?
&lt;/h2&gt;

&lt;p&gt;Docker Model Runner provides a Docker-native experience for running LLMs &lt;strong&gt;locally on Apple Silicon Macs&lt;/strong&gt;, leveraging GPU acceleration. Unlike traditional containerised AI models, Model Runner uses a host-installed inference server (currently powered by &lt;strong&gt;llama.cpp&lt;/strong&gt;) to maximise performance. For those using &lt;strong&gt;Windows&lt;/strong&gt; with NVIDIA GPUs, support is expected in &lt;strong&gt;April 2025&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This feature allows developers to build, test, and iterate on AI applications without relying on cloud-based APIs. Whether you’re developing AI assistants or integrating AI into existing apps, Docker Model Runner has you covered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Benefits of Docker Model Runner&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local AI Inference:&lt;/strong&gt; Perform inference tasks without external API calls, ensuring data privacy.&lt;br&gt;
&lt;strong&gt;GPU Acceleration:&lt;/strong&gt; Harness the full power of Apple Silicon GPUs for lightning-fast model execution.&lt;br&gt;
&lt;strong&gt;Seamless Docker Integration:&lt;/strong&gt; Manage AI models using familiar Docker CLI commands.&lt;br&gt;
&lt;strong&gt;Faster Iterations:&lt;/strong&gt; Test and iterate AI applications locally with minimal latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native Docker Integration with docker model CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker Desktop 4.40+ introduces the docker model CLI, making AI models first-class citizens within Docker. You can now pull, run, inspect, and manage models with commands similar to managing containers, images, and volumes.&lt;/p&gt;

&lt;p&gt;Here are a few useful commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model --help  # List available commands
docker model list    # List the available models that can be run with the Docker Model Runner
docker model status  # Check if the Docker Model Runner is running
docker model ls      # List downloaded models
docker model pull    # Download a model
docker model run     # Run a model with the Docker Model Runner
docker model rm      # Remove a model downloaded from Docker Hub
docker model version # Show the Docker Model Runner version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How Does Docker Model Runner Work?
&lt;/h2&gt;

&lt;p&gt;Rather than running models inside containers, Docker Model Runner runs them directly on the host. It uses llama.cpp as the inference server, bypassing the containerisation layer to minimise overhead and maximise GPU utilisation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Technical Insights&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Host-level Process:&lt;/strong&gt; Models run directly on the host system using llama.cpp.&lt;br&gt;
&lt;strong&gt;GPU Acceleration:&lt;/strong&gt; Apple Silicon’s Metal API is used for GPU acceleration.&lt;br&gt;
&lt;strong&gt;Model Loading:&lt;/strong&gt; Models are pulled from Docker Hub and cached locally, then dynamically loaded into memory for efficient inference.&lt;/p&gt;

&lt;p&gt;This architecture results in faster model execution than containerised AI solutions, reducing development time and improving the developer experience.&lt;/p&gt;
&lt;h2&gt;
  
  
  Who Should Use Docker Model Runner?
&lt;/h2&gt;

&lt;p&gt;Docker Model Runner is designed for:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Developers:&lt;/strong&gt; Build and test GenAI-powered apps locally.&lt;br&gt;
&lt;strong&gt;Data Scientists:&lt;/strong&gt; Efficiently run and evaluate models using local GPU resources.&lt;br&gt;
&lt;strong&gt;Application Developers:&lt;/strong&gt; Integrate AI capabilities into applications without leaving the Docker ecosystem.&lt;br&gt;
&lt;strong&gt;Teams Developing AI Prototypes:&lt;/strong&gt; Rapidly iterate on AI applications without cloud dependency.&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting Started with Docker Model Runner
&lt;/h2&gt;

&lt;p&gt;Ready to experience Docker Model Runner? Follow these steps to get started:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Docker Desktop 4.40+&lt;/strong&gt;&lt;br&gt;
Ensure you have Docker Desktop 4.40 or later installed on your Apple Silicon Mac. You can download the latest version from the Docker website. &lt;/p&gt;

&lt;p&gt;My Docker Desktop version is &lt;strong&gt;Docker Desktop 4.40.0 (186969)&lt;/strong&gt;, which is currently the newest version available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Enable Model Runner&lt;/strong&gt;&lt;br&gt;
Open Docker Desktop.&lt;br&gt;
Navigate to &lt;strong&gt;Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Features in Development&lt;/strong&gt;.&lt;br&gt;
Ensure &lt;strong&gt;Docker Model Runner&lt;/strong&gt; is enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtrwb9hj9jqsgl5tawj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxtrwb9hj9jqsgl5tawj2.png" alt="Image description" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For TCP support, also enable “Enable host-side TCP support”. This allows models to accept connections on port 12434 by default.&lt;/p&gt;
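
&lt;p&gt;Because these endpoints are OpenAI-compatible, once a model has been pulled (see Step 4 below) you can do a quick sanity check from the host with the standard OpenAI Python client. This is a minimal sketch; the port (12434), the engines/llama.cpp/v1 path, and the model tag are assumptions based on the defaults described here.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Minimal sketch: calling Model Runner's OpenAI-compatible endpoint from the host.
# Assumes host-side TCP support is enabled on the default port 12434.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/llama.cpp/v1/",
    api_key="not-needed",  # local inference, no real key required
)

response = client.chat.completions.create(
    model="ai/llama3.2:1B-Q8_0",
    messages=[{"role": "user", "content": "Hi, how are you?"}],
)
print(response.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;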

&lt;p&gt;&lt;strong&gt;Step 3: Verify Installation&lt;/strong&gt;&lt;br&gt;
Run the following command to check if Model Runner is active:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: docker model status
# Output: Docker Model Runner is running

~/Documents/docker  docker model status
Docker Model Runner is running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Download a Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Download a model from Docker Hub using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: docker model pull ai/llama3.2:1B-Q8_0

 ~/Documents/docker  docker model pull ai/llama3.2:1B-Q8_0
Downloaded: 1259.13 MB
Model ai/llama3.2:1B-Q8_0 pulled successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can choose from a range of models, including:&lt;/p&gt;

&lt;p&gt;ai/llama3.2&lt;br&gt;
ai/gemma3&lt;br&gt;
ai/qwen2.5&lt;br&gt;
ai/mistral&lt;br&gt;
Refer here for more models: &lt;a href="https://hub.docker.com/u/ai" rel="noopener noreferrer"&gt;https://hub.docker.com/u/ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Run a Model&lt;/strong&gt;&lt;br&gt;
Send a simple input to your model using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: docker model run ai/llama3.2:1B-Q8_0 "Hi, how are you?"
# example Output: Hello! How can I help you today?

 ~/Documents/docker  docker model run ai/llama3.2:1B-Q8_0 "Hi, how are you?"
I'm just a computer program, so I don't have emotions or feelings like humans do, but thanks for asking! How can I assist you today?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;You can also run models in interactive mode:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: docker model run ai/llama3.2:1B-Q8_0
&amp;gt; Why is the sky blue?


 ~/Documents/docker  docker model run ai/llama3.2:1B-Q8_0
&amp;gt; Why is the sky blue?
Interactive chat mode started. Type '/bye' to exit.
&amp;gt; Why is the sky blue?
The sky appears blue because of a phenomenon called Rayleigh scattering, named after the British physicist Lord Rayleigh, who first described it in the late 19th century.

Here's what happens:

**The Short Answer:** The shorter (blue) wavelengths of light are scattered more than the longer (red) wavelengths, giving the sky its blue appearance.

**The Longer Answer:**

When sunlight enters Earth's atmosphere, it encounters tiny molecules of gases such as nitrogen (N2) and oxygen (O2). These molecules are much smaller than the wavelength of light, and they scatter the light in all directions. This scattering effect is more pronounced for shorter wavelengths, like blue and violet, due to the smaller particle sizes.

There are two types of scattering that occur in the atmosphere:

1. **Rayleigh scattering**: This type of scattering occurs when light is scattered by small particles, like the molecules of gases in the atmosphere. The shorter wavelengths (like blue and violet) are scattered more than the longer wavelengths (like red and orange). This is because the smaller particles are more effective at scattering the shorter wavelengths.
2. **Mie scattering**: This type of scattering occurs when light is scattered by larger particles, like dust and pollen. The longer wavelengths (like red and orange) are scattered less than the shorter wavelengths.

**Why the blue color dominates:**

As sunlight enters the atmosphere, it encounters more Rayleigh scattering than Mie scattering. The scattered blue light is then reflected back into our eyes, creating the illusion of a blue sky. The longer wavelengths, like red and orange, are scattered in all directions and are not reflected back, making them appear more red.

**Additional factors:**

Other factors can also influence the color of the sky, such as:

* **Dust and pollution**: Tiny particles in the atmosphere can scatter light in different ways, changing the color of the sky.
* **Clouds**: Clouds can reflect or absorb light, changing the color of the sky.
* **Time of day and year**: The angle of the sun and the position of the Earth in its orbit can also affect the color of the sky.

Now, the next time you gaze up at the sky, remember that the blue color is a result of a fascinating combination of Rayleigh and Mie scattering!
&amp;gt; /bye
Chat session ended.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 6: Remove a Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you no longer need a model, remove it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: docker model rm ai/llama3.2:1B-Q8_0

 ~/Doc/docker  docker model rm ai/llama3.2:1B-Q8_0
Model ai/llama3.2:1B-Q8_0 removed successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using Model Runner in Your Application
&lt;/h2&gt;

&lt;p&gt;You can integrate Model Runner into your AI applications using its OpenAI-compatible endpoints. Follow these steps to build a GenAI application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Download the Model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker model pull ai/llama3.2:1B-Q8_0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Clone the Repository from the official Docker GitHub&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/docker/hello-genai" rel="noopener noreferrer"&gt;https://github.com/docker/hello-genai&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/docker/hello-genai
cd hello-genai

 ~/Documents/docker  git clone https://github.com/docker/hello-genai
cd hello-genai
Cloning into 'hello-genai'...
remote: Enumerating objects: 31, done.
remote: Counting objects: 100% (31/31), done.
remote: Compressing objects: 100% (26/26), done.
remote: Total 31 (delta 4), reused 27 (delta 2), pack-reused 0 (from 0)
Receiving objects: 100% (31/31), 15.49 KiB | 7.75 MiB/s, done.
Resolving deltas: 100% (4/4), done.

 ~/Documents/docker/hello-genai  main  ll
total 56
-rw-r--r--  1 macpro  staff   413B 29 Mar 18:15 Dockerfile
-rw-r--r--  1 macpro  staff    11K 29 Mar 18:15 LICENSE
-rw-r--r--  1 macpro  staff   1.1K 29 Mar 18:15 README.md
-rw-r--r--  1 macpro  staff   800B 29 Mar 18:15 docker-compose.yml
drwxr-xr-x  7 macpro  staff   224B 29 Mar 18:15 go-genai
drwxr-xr-x  6 macpro  staff   192B 29 Mar 18:15 node-genai
drwxr-xr-x  7 macpro  staff   224B 29 Mar 18:15 py-genai
-rwxr-xr-x  1 macpro  staff   517B 29 Mar 18:15 run.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Configure the Backend by setting the environment variables&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BASE_URL=http://model-runner.docker.internal/engines/llama.cpp/v1/
MODEL=ai/llama3.2:1B-Q8_0

~/Documents/docker/hello-genai BASE_URL=http://model-runner.docker.internal/engines/llama.cpp/v1/
MODEL=ai/llama3.2:1B-Q8_0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
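
&lt;p&gt;For reference, here is a stripped-down Python sketch of how any client could consume these two variables. It is purely illustrative and is not the repository's actual code; it simply shows that the backends talk to Model Runner through a standard OpenAI-compatible client.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative only: consuming BASE_URL and MODEL from the environment.
import os

from openai import OpenAI

client = OpenAI(base_url=os.environ["BASE_URL"], api_key="not-needed")

reply = client.chat.completions.create(
    model=os.environ["MODEL"],
    messages=[{"role": "user", "content": "Say hello from Docker Model Runner"}],
)
print(reply.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;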



&lt;p&gt;&lt;strong&gt;Step 4: Start the Application&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: docker compose up -d


 ~/Documents/docker/hello-genai  main  docker compose up -d

Compose now can delegate build to bake for better performances
Just set COMPOSE_BAKE=true
[+] Building 12.1s (47/47) FINISHED                                                                    docker:desktop-linux
 =&amp;gt; [node-genai internal] load build definition from Dockerfile                                                        0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 329B                                                                                   0.0s
 =&amp;gt; [go-genai internal] load build definition from Dockerfile                                                          0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 706B                                                                                   0.0s
 =&amp;gt; [python-genai internal] load build definition from Dockerfile                                                      0.0s
 =&amp;gt; =&amp;gt; transferring dockerfile: 421B                                                                                   0.0s
 =&amp;gt; [go-genai internal] load metadata for docker.io/library/alpine:latest                                              1.3s
 =&amp;gt; [go-genai internal] load metadata for docker.io/library/golang:1.21-alpine                                         1.3s
 =&amp;gt; [python-genai internal] load metadata for docker.io/library/python:3.11-slim                                       1.3s
 =&amp;gt; [node-genai internal] load metadata for docker.io/library/node:20-alpine                                           1.3s
 =&amp;gt; [go-genai auth] library/alpine:pull token for registry-1.docker.io                                                 0.0s
 =&amp;gt; [go-genai auth] library/golang:pull token for registry-1.docker.io                                                 0.0s
 =&amp;gt; [python-genai auth] library/python:pull token for registry-1.docker.io                                             0.0s
 =&amp;gt; [node-genai auth] library/node:pull token for registry-1.docker.io                                                 0.0s
 =&amp;gt; [python-genai internal] load .dockerignore                                                                         0.0s
 =&amp;gt; =&amp;gt; transferring context: 271B                                                                                      0.0s
 =&amp;gt; [go-genai internal] load .dockerignore                                                                             0.0s
 =&amp;gt; =&amp;gt; transferring context: 239B                                                                                      0.0s
 =&amp;gt; [node-genai internal] load .dockerignore                                                                           0.0s
 =&amp;gt; =&amp;gt; transferring context: 2B                                                                                        0.0s
 =&amp;gt; [python-genai 1/7] FROM docker.io/library/python:3.11-slim@sha256:7029b00486ac40bed03e36775b864d3f3d39dcbdf19cd45  3.5s
 =&amp;gt; =&amp;gt; resolve docker.io/library/python:3.11-slim@sha256:7029b00486ac40bed03e36775b864d3f3d39dcbdf19cd45e6a52d541e6c1  0.0s
 =&amp;gt; =&amp;gt; sha256:7029b00486ac40bed03e36775b864d3f3d39dcbdf19cd45e6a52d541e6c178f0 9.13kB / 9.13kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:b302c93452310b37ffcd9436bbfdb0a3d5b60b19dd3c86badb83b66d20c38ca4 1.75kB / 1.75kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:f4c484c0338e730d3e53480fb48088dc3b3a016858b94559ce7087577ae3b204 5.31kB / 5.31kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:d9b6365477446a79987b20560ae52637be6f54d6d2f801e16aaa0ca25dd0964b 28.04MB / 28.04MB                       1.3s
 =&amp;gt; =&amp;gt; sha256:3f529d1f5c642f942a2f18a72c6b14f378767cdf3f66131f04497701e8361712 3.33MB / 3.33MB                         1.0s
 =&amp;gt; =&amp;gt; sha256:18513d00e8c20c79d50d6646c0f48fe4349ead2882c93b54d94bb54917adbd90 16.13MB / 16.13MB                       1.4s
 =&amp;gt; =&amp;gt; sha256:82a6dca8a532d0999dead89c78b567015ba2ee32175dea32fefbecf004cad9d6 249B / 249B                             1.2s
 =&amp;gt; =&amp;gt; extracting sha256:d9b6365477446a79987b20560ae52637be6f54d6d2f801e16aaa0ca25dd0964b                              1.3s
 =&amp;gt; =&amp;gt; extracting sha256:3f529d1f5c642f942a2f18a72c6b14f378767cdf3f66131f04497701e8361712                              0.1s
 =&amp;gt; =&amp;gt; extracting sha256:18513d00e8c20c79d50d6646c0f48fe4349ead2882c93b54d94bb54917adbd90                              0.6s
 =&amp;gt; =&amp;gt; extracting sha256:82a6dca8a532d0999dead89c78b567015ba2ee32175dea32fefbecf004cad9d6                              0.0s
 =&amp;gt; [python-genai internal] load build context                                                                         0.0s
 =&amp;gt; =&amp;gt; transferring context: 9.84kB                                                                                    0.0s
 =&amp;gt; [go-genai builder 1/7] FROM docker.io/library/golang:1.21-alpine@sha256:2414035b086e3c42b99654c8b26e6f5b1b1598080  4.7s
 =&amp;gt; =&amp;gt; resolve docker.io/library/golang:1.21-alpine@sha256:2414035b086e3c42b99654c8b26e6f5b1b1598080d65fd03c7f499552f  0.0s
 =&amp;gt; =&amp;gt; sha256:2414035b086e3c42b99654c8b26e6f5b1b1598080d65fd03c7f499552ff4dc94 10.30kB / 10.30kB                       0.0s
 =&amp;gt; =&amp;gt; sha256:5bc6d0431a4fdc57fb24d437c177c5e02fde1d0585eeb4c5d483c1b65aebfb00 1.92kB / 1.92kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:2bbe4e7e4d4e0f6f1b6c7192f01b9c7099e921b9fe8eae0c5c939a1d257f7e81 2.10kB / 2.10kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:690e87867337b8441990047e169b892933e9006bdbcbed52ab7a356945477a4d 4.09MB / 4.09MB                         0.2s
 =&amp;gt; =&amp;gt; sha256:171883aaf475f5dea5723bb43248d9cf3f3c3a7cf5927947a8bed4836bbccb62 293.51kB / 293.51kB                     0.2s
 =&amp;gt; =&amp;gt; sha256:2a6022646f09ee78a83ef4abd0f5af04071b6563cf16a18e00fb2dcfe63ca0a3 64.11MB / 64.11MB                       1.0s
 =&amp;gt; =&amp;gt; extracting sha256:690e87867337b8441990047e169b892933e9006bdbcbed52ab7a356945477a4d                              0.1s
 =&amp;gt; =&amp;gt; sha256:e495e1face5cc12777f452389e1da15202c37ec00ba024f12f841b5c90a47057 127B / 127B                             0.5s
 =&amp;gt; =&amp;gt; sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 32B / 32B                               0.5s
 =&amp;gt; =&amp;gt; extracting sha256:171883aaf475f5dea5723bb43248d9cf3f3c3a7cf5927947a8bed4836bbccb62                              0.0s
 =&amp;gt; =&amp;gt; extracting sha256:2a6022646f09ee78a83ef4abd0f5af04071b6563cf16a18e00fb2dcfe63ca0a3                              3.6s
 =&amp;gt; =&amp;gt; extracting sha256:e495e1face5cc12777f452389e1da15202c37ec00ba024f12f841b5c90a47057                              0.0s
 =&amp;gt; =&amp;gt; extracting sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1                              0.0s
 =&amp;gt; [go-genai stage-1 1/4] FROM docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2  1.7s
 =&amp;gt; =&amp;gt; resolve docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88  0.0s
 =&amp;gt; =&amp;gt; sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c 9.22kB / 9.22kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:757d680068d77be46fd1ea20fb21db16f150468c5e7079a08a2e4705aec096ac 1.02kB / 1.02kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:8d591b0b7dea080ea3be9e12ae563eebf9869168ffced1cb25b2470a3d9fe15e 597B / 597B                             0.0s
 =&amp;gt; =&amp;gt; sha256:6e771e15690e2fabf2332d3a3b744495411d6e0b00b2aea64419b58b0066cf81 3.99MB / 3.99MB                         1.5s
 =&amp;gt; =&amp;gt; extracting sha256:6e771e15690e2fabf2332d3a3b744495411d6e0b00b2aea64419b58b0066cf81                              0.1s
 =&amp;gt; [go-genai internal] load build context                                                                             0.0s
 =&amp;gt; =&amp;gt; transferring context: 12.11kB                                                                                   0.0s
 =&amp;gt; [node-genai 1/6] FROM docker.io/library/node:20-alpine@sha256:8bda036ddd59ea51a23bc1a1035d3b5c614e72c01366d989f41  3.3s
 =&amp;gt; =&amp;gt; resolve docker.io/library/node:20-alpine@sha256:8bda036ddd59ea51a23bc1a1035d3b5c614e72c01366d989f4120e8adca196  0.0s
 =&amp;gt; =&amp;gt; sha256:8bda036ddd59ea51a23bc1a1035d3b5c614e72c01366d989f4120e8adca196d4 7.67kB / 7.67kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:646bc11400802534d70bc576fc9704bb6409e0ed802658a8e785472d160c38d3 1.72kB / 1.72kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:f06038a15a690bf7aa19f84b6d7bf2646640654b47fd5d2dd8fca56ff81f0789 6.20kB / 6.20kB                         0.0s
 =&amp;gt; =&amp;gt; sha256:6e771e15690e2fabf2332d3a3b744495411d6e0b00b2aea64419b58b0066cf81 3.99MB / 3.99MB                         1.5s
 =&amp;gt; =&amp;gt; sha256:8bfaf621f0ce4e4ddf5efac2af68a506ee03c6cb9151dd7f9b2b8937b3cfdfb5 1.26MB / 1.26MB                         1.7s
 =&amp;gt; =&amp;gt; sha256:0e3e948a6346a46c6ad228463eb896bd63a271b9b122fc90d39a8702b2349746 42.63MB / 42.63MB                       2.0s
 =&amp;gt; =&amp;gt; extracting sha256:6e771e15690e2fabf2332d3a3b744495411d6e0b00b2aea64419b58b0066cf81                              0.1s
 =&amp;gt; =&amp;gt; sha256:8d73ebbcecf2389cf3461a46c9a11972293f334e5093233adb4f1c65ef91e4e0 444B / 444B                             1.7s
 =&amp;gt; =&amp;gt; extracting sha256:0e3e948a6346a46c6ad228463eb896bd63a271b9b122fc90d39a8702b2349746                              1.1s
 =&amp;gt; =&amp;gt; extracting sha256:8bfaf621f0ce4e4ddf5efac2af68a506ee03c6cb9151dd7f9b2b8937b3cfdfb5                              0.0s
 =&amp;gt; =&amp;gt; extracting sha256:8d73ebbcecf2389cf3461a46c9a11972293f334e5093233adb4f1c65ef91e4e0                              0.0s
 =&amp;gt; [node-genai internal] load build context                                                                           0.0s
 =&amp;gt; =&amp;gt; transferring context: 10.20kB                                                                                   0.0s
 =&amp;gt; [go-genai stage-1 2/4] RUN apk --no-cache add ca-certificates                                                      0.8s
 =&amp;gt; [go-genai stage-1 3/4] WORKDIR /root/                                                                              0.1s
 =&amp;gt; [node-genai 2/6] WORKDIR /app                                                                                      0.1s
 =&amp;gt; [node-genai 3/6] COPY package*.json ./                                                                             0.0s
 =&amp;gt; [node-genai 4/6] RUN npm install                                                                                   2.4s
 =&amp;gt; [python-genai 2/7] WORKDIR /app                                                                                    0.0s
 =&amp;gt; [python-genai 3/7] COPY requirements.txt .                                                                         0.0s
 =&amp;gt; [python-genai 4/7] RUN pip install --no-cache-dir -r requirements.txt                                              2.9s
 =&amp;gt; [go-genai builder 2/7] RUN apk add --no-cache git                                                                  0.8s
 =&amp;gt; [go-genai builder 3/7] WORKDIR /app                                                                                0.0s
 =&amp;gt; [go-genai builder 4/7] COPY go.mod ./                                                                              0.0s
 =&amp;gt; [go-genai builder 5/7] RUN go mod download                                                                         0.1s
 =&amp;gt; [go-genai builder 6/7] COPY . .                                                                                    0.0s
 =&amp;gt; [go-genai builder 7/7] RUN CGO_ENABLED=0 GOOS=linux go build -o /go-genai                                          4.7s
 =&amp;gt; [node-genai 5/6] COPY . .                                                                                          0.0s
 =&amp;gt; [node-genai 6/6] RUN mkdir -p views                                                                                0.2s
 =&amp;gt; [node-genai] exporting to image                                                                                    0.1s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                0.1s
 =&amp;gt; =&amp;gt; writing image sha256:05338f427d2bc2449ac87eea0888961c68a93df74ca11b535b1f292a7179b098                           0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/hello-genai-node-genai                                                              0.0s
 =&amp;gt; [node-genai] resolving provenance for metadata file                                                                0.0s
 =&amp;gt; [python-genai 5/7] COPY . .                                                                                        0.0s
 =&amp;gt; [python-genai 6/7] RUN mkdir -p templates                                                                          0.1s
 =&amp;gt; [python-genai 7/7] COPY templates/index.html templates/                                                            0.0s
 =&amp;gt; [python-genai] exporting to image                                                                                  0.1s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                0.1s
 =&amp;gt; =&amp;gt; writing image sha256:ee3a3acc92d2574fd1ce81ad8e167e2bedb32202b5cbf3cd0e80927797a37893                           0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/hello-genai-python-genai                                                            0.0s
 =&amp;gt; [python-genai] resolving provenance for metadata file                                                              0.0s
 =&amp;gt; [go-genai stage-1 4/4] COPY --from=builder /go-genai .                                                             0.0s
 =&amp;gt; [go-genai] exporting to image                                                                                      0.1s
 =&amp;gt; =&amp;gt; exporting layers                                                                                                0.1s
 =&amp;gt; =&amp;gt; writing image sha256:7ce7c4d915612db46ca4ee65dc53499a69509a80d8ecbb372366c58a1f6be533                           0.0s
 =&amp;gt; =&amp;gt; naming to docker.io/library/hello-genai-go-genai                                                                0.0s
 =&amp;gt; [go-genai] resolving provenance for metadata file                                                                  0.0s
[+] Running 7/7
 ✔ go-genai                              Built                                                                         0.0s
 ✔ node-genai                            Built                                                                         0.0s
 ✔ python-genai                          Built                                                                         0.0s
 ✔ Network hello-genai_default           Created                                                                       0.0s
 ✔ Container hello-genai-python-genai-1  Started                                                                       0.4s
 ✔ Container hello-genai-node-genai-1    Started                                                                       0.4s
 ✔ Container hello-genai-go-genai-1      Started    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access the App: Visit &lt;a href="http://localhost:8081" rel="noopener noreferrer"&gt;http://localhost:8081&lt;/a&gt; to interact with your AI-powered application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Chat with the Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kw15ov5bhplxf8gmf11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kw15ov5bhplxf8gmf11.png" alt="Image description" width="800" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6: Stop the Application&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ~/Documents/docker/hello-genai  main  docker-compose down                                               ok  14s  18:17:53
[+] Running 4/4
 ✔ Container hello-genai-go-genai-1      Removed                                                                       0.2s
 ✔ Container hello-genai-python-genai-1  Removed                                                                      10.3s
 ✔ Container hello-genai-node-genai-1    Removed                                                                      10.3s
 ✔ Network hello-genai_default           Removed     
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Docker Model Runner&lt;/strong&gt; makes AI development faster, simpler, and more efficient by bringing AI inference to your local machine. With powerful GPU acceleration and seamless Docker integration, you can build and test GenAI applications with minimal friction.&lt;/p&gt;

&lt;p&gt;Whether you’re an AI researcher, developer, or enthusiast, Docker Model Runner is an invaluable addition to your AI toolkit. Give it a try and experience the future of local AI development.&lt;/p&gt;

&lt;p&gt;Stay tuned for upcoming updates, including Windows support and additional model integrations!&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>docker</category>
    </item>
    <item>
      <title>The Evolution of Computing: From Vacuum Tubes to the Groundbreaking Majorana Chip</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Mon, 24 Feb 2025 08:11:36 +0000</pubDate>
      <link>https://dev.to/cloudgeek7/the-evolution-of-computing-from-vacuum-tubes-to-the-groundbreaking-majorana-chip-ii5</link>
      <guid>https://dev.to/cloudgeek7/the-evolution-of-computing-from-vacuum-tubes-to-the-groundbreaking-majorana-chip-ii5</guid>
      <description>&lt;h2&gt;
  
  
  The Evolution of Computing: From Vacuum Tubes to Majorana Qubits
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4hgyua5p5xuij4ihvp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4hgyua5p5xuij4ihvp8.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Age of Vacuum Tubes (1904–1950s)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A vacuum tube is an electronic device that controls the flow of electric current between electrodes sealed inside an evacuated glass envelope. Because the air has been removed, electrons emitted by a heated cathode can travel freely to the anode, allowing the tube to switch and amplify electrical signals.&lt;/p&gt;

&lt;p&gt;The journey of modern computing began with vacuum tubes, which were large, fragile, and consumed immense power.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtseloz2x1p8rw2qc5i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtseloz2x1p8rw2qc5i0.png" alt="Image description" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Milestones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1904:&lt;/strong&gt; John Ambrose Fleming invented the first vacuum tube, the thermionic diode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1920s — 1930s:&lt;/strong&gt; The development of triodes (which could amplify signals) revolutionised early electronics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1940s:&lt;/strong&gt; The first general-purpose electronic computers, such as the ENIAC (1945), used thousands of vacuum tubes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges:&lt;/strong&gt; Vacuum tubes were bulky, overheated easily, and were prone to failure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The Rise of Transistors (1947–1960s)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To overcome the limitations of vacuum tubes, the transistor was invented, marking a new era of miniaturisation and efficiency.&lt;/p&gt;

&lt;p&gt;A transistor is a miniature semiconductor device that regulates current or voltage flow and can amplify or generate electrical signals as well as act as a switch or gate for them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhkie97qo0eev11q2el2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhkie97qo0eev11q2el2.png" alt="Image description" width="750" height="284"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Milestones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1947:&lt;/strong&gt; John Bardeen, Walter Brattain, and William Shockley at Bell Labs invented the first transistor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1950s:&lt;/strong&gt; Transistors started replacing vacuum tubes in computing devices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1958:&lt;/strong&gt; Jack Kilby, and independently Robert Noyce shortly after, developed the first integrated circuits (ICs), paving the way for further miniaturisation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact:&lt;/strong&gt; Computers became smaller, more efficient, and more reliable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. The Integrated Circuit Revolution (1960s — 1970s)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An integrated circuit (IC) is an assembly of electronic components fabricated as a single unit, in which miniaturised active devices (e.g., transistors and diodes), passive devices (e.g., capacitors and resistors), and their interconnections are built up on a thin substrate of semiconductor material (typically silicon).&lt;/p&gt;

&lt;p&gt;The development of ICs enabled the creation of microprocessors, leading to the first personal computers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdf0h85dbo7t4sr1sgp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdf0h85dbo7t4sr1sgp6.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Milestones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1965:&lt;/strong&gt; Gordon Moore proposed Moore’s Law, predicting the exponential growth of computing power.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1971:&lt;/strong&gt; Intel released the first commercial microprocessor, the Intel 4004.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Late 1970s:&lt;/strong&gt; The rise of personal computing with Apple, IBM, and other companies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact:&lt;/strong&gt; Computing power became accessible to individuals and businesses worldwide.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. The Silicon Era and Moore’s Law Expansion (1980s — 2000s)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With continued advances in silicon-based semiconductors, computing became faster, cheaper, and more ubiquitous.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42uoj47fa09uw86cjeu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42uoj47fa09uw86cjeu7.png" alt="Image description" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Milestones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1980s — 1990s:&lt;/strong&gt; Rise of microcomputers, graphical user interfaces (GUI), and the internet revolution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2000s:&lt;/strong&gt; Multi-core processors and parallel computing became mainstream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrh3t1u41ysuilul2uqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcrh3t1u41ysuilul2uqe.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7tws5x99qs9i87t3f8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7tws5x99qs9i87t3f8b.png" alt="Image description" width="686" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges:&lt;/strong&gt; Moore’s Law showed signs of slowing as transistors approached atomic-scale limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. The Dawn of Quantum Computing (1990s — Present)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With classical computing facing physical limitations, quantum mechanics opened new possibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Milestones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1994:&lt;/strong&gt; Peter Shor developed Shor’s algorithm, demonstrating quantum computing’s potential to break RSA encryption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1998:&lt;/strong&gt; First experimental demonstration of quantum computing with 2 qubits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2019:&lt;/strong&gt; Google claimed quantum supremacy with Sycamore, performing calculations infeasible for classical computers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r2pun75lv48yebii6x1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r2pun75lv48yebii6x1.png" alt="Image description" width="800" height="1057"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges:&lt;/strong&gt; Quantum decoherence, error correction, and hardware stability remain obstacles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Majorana Qubits: The Future of Quantum Computing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the most promising approaches in quantum computing involves Majorana fermions, exotic particles that can enable topological qubits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Milestones:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1937:&lt;/strong&gt; Ettore Majorana proposed the existence of Majorana fermions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2012:&lt;/strong&gt; Researchers reported experimental signatures of Majorana fermions in superconducting nanowires, a line of research that Microsoft has backed heavily.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2025:&lt;/strong&gt; In a groundbreaking advancement, Microsoft unveiled the Majorana 1 chip, a quantum processor poised to revolutionise the field of quantum computing. This innovation leverages topological qubits derived from Majorana fermions, aiming to overcome longstanding challenges in qubit stability and scalability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5lgdb7y3ejpjnf2bvfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5lgdb7y3ejpjnf2bvfo.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing the Majorana 1 Chip
&lt;/h2&gt;

&lt;p&gt;The Majorana 1 chip represents a significant milestone in this research trajectory. Utilising a novel Topological Core architecture, this processor is designed to house up to one million qubits on a single, palm-sized chip. This scalability is crucial for tackling complex, real-world problems that are beyond the capabilities of classical computers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Science Behind Topological Qubits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Topological qubits gain their resilience from the unique properties of Majorana fermions. By manipulating these particles through processes known as “braiding,” quantum information can be encoded in a manner that is inherently resistant to errors caused by local perturbations. This method stands in contrast to traditional qubits, which require extensive error correction protocols.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implications and Future Prospects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The development of the Majorana 1 chip is poised to accelerate the realisation of practical quantum computing applications. Potential impacts span various industries, including cryptography, material science, and complex system modelling. While challenges remain in scaling and integrating this technology, Microsoft’s breakthrough offers a promising pathway toward quantum systems capable of addressing problems once deemed intractable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;From vacuum tubes to transistors, from silicon chips to quantum bits, computing has undergone a &lt;strong&gt;massive transformation&lt;/strong&gt;. The next era, powered by &lt;strong&gt;Majorana qubits&lt;/strong&gt;, could unlock unprecedented computational capabilities — ushering in a &lt;strong&gt;new age of quantum supremacy and secure cryptographic systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s next?&lt;/strong&gt; The race for practical quantum computing is on, and the future is closer than ever.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>quantumcomputing</category>
    </item>
    <item>
      <title>Self-Hosting AI Workflows on Mac: DeepSeek-R1 + Ollama + n8n powered by Docker</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Thu, 13 Feb 2025 08:28:27 +0000</pubDate>
      <link>https://dev.to/docker/self-hosting-ai-workflows-on-mac-deepseek-r1-ollama-n8n-powered-by-docker-4a9k</link>
      <guid>https://dev.to/docker/self-hosting-ai-workflows-on-mac-deepseek-r1-ollama-n8n-powered-by-docker-4a9k</guid>
      <description>&lt;p&gt;I’ll walk through the process of setting up a local environment on a MacBook Pro M3 (18GB) to test the DeepSeek model using Ollama, n8n, and Docker via Docker Desktop. &lt;strong&gt;This setup is suitable ONLY for performing PoCs and development environments.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we begin, make sure you have the following (or similar setup):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;MacBook Pro M3 with 18GB RAM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker installed (via Docker desktop - &lt;a href="https://www.docker.com/products/docker-desktop" rel="noopener noreferrer"&gt;https://www.docker.com/products/docker-desktop&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ollama installed (&lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;https://ollama.com&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;n8n installed (&lt;a href="https://docs.n8n.io/hosting" rel="noopener noreferrer"&gt;https://docs.n8n.io/hosting&lt;/a&gt; — for automating workflows, we’ll use this to test DeepSeek)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting Up DeepSeek-R1 with Ollama
&lt;/h2&gt;

&lt;p&gt;DeepSeek-R1 is a powerful open-source AI model optimized for complex reasoning, coding, mathematics, and problem-solving. Running it locally ensures privacy, performance, and customization.&lt;/p&gt;

&lt;p&gt;Let’s dive into the steps!&lt;br&gt;
&lt;strong&gt;Step 1: Install n8n Using Docker Compose&lt;/strong&gt;&lt;br&gt;
We’ll use Docker Compose to spin up n8n.&lt;/p&gt;

&lt;p&gt;Create a folder for the n8n setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir ~/n8n-docker &amp;amp;&amp;amp; cd ~/n8n-docker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create a simple docker-compose.yml file for n8n:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    ports:
      - "5678:5678"
    volumes:
      - ./n8n_data:/home/node/.n8n # Persistent storage for workflows and data
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Start the n8n service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
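
&lt;p&gt;Before moving on, you can optionally confirm from the terminal that the container started and the UI is responding. This is a minimal sanity check against the same compose setup as above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the compose services and check that n8n is "Up"
docker-compose ps

# The n8n UI should answer on port 5678
curl -sI http://localhost:5678 | head -n 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;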



&lt;p&gt;&lt;strong&gt;4. Open n8n in your browser: &lt;a href="http://localhost:5678" rel="noopener noreferrer"&gt;http://localhost:5678&lt;/a&gt;. Set up an owner account and log in with those credentials.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88txda6yggbotuug2o3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88txda6yggbotuug2o3w.png" alt="Image description" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can request a free n8n license key (emailed to you) to unlock selected paid features on your community edition of n8n. You can enter and activate your key as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feeql86gwriciwd1rvdce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feeql86gwriciwd1rvdce.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Install Ollama&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ~/Doc/AI-Projects  brew install ollama                     
Warning: ollama 0.5.7 is already installed and up-to-date.
To reinstall 0.5.7, run:
  brew reinstall ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
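
&lt;p&gt;If this is a fresh Ollama install, make sure the Ollama server is actually running before pulling models. Two common options on macOS (the brew service name is an assumption; the Ollama desktop app also works):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run Ollama as a background service via Homebrew (assumed service name)
brew services start ollama

# ...or run the server in a separate terminal window
ollama serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;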



&lt;p&gt;&lt;strong&gt;Step 3: Download the DeepSeek 7B model using Ollama:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Doc/AI-Projects  ollama run deepseek-r1:7b                                    
pulling manifest
pulling 96c415656d37... 100% ▕████████████████▏ 4.7 GB
pulling 369ca498f347... 100% ▕████████████████▏  387 B
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕████████████████▏  148 B
pulling 40fb844194b2... 100% ▕████████████████▏  487 B
verifying sha256 digest
writing manifest
success
&amp;gt;&amp;gt;&amp;gt; /bye

 ~/Doc/AI-Projects  ollama list                                  
NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    0a8c26691023    4.7 GB    26 hours ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Integrate DeepSeek with n8n&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s set up a workflow in n8n to interact with the DeepSeek model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1: Create Chat Trigger, Basic LLM Chain and Ollama Chat Model nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4wgbxf39y8qihgyrgmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4wgbxf39y8qihgyrgmw.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2: Configure Ollama Chat Model and test the connectivity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Double-click on the Ollama Chat Model node and configure the Ollama Base URL.&lt;/p&gt;

&lt;p&gt;Instead of using localhost, set the base URL to host.docker.internal with Ollama’s default port (11434). Using localhost will result in a connection refused error, because inside a Docker container localhost refers to the container itself; host.docker.internal is how containers reach services running on the host machine.&lt;/p&gt;
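
&lt;p&gt;A quick way to sanity-check this is to hit Ollama’s API directly. The first command runs on the host; the second runs the same request from inside the n8n container via host.docker.internal (it assumes BusyBox wget is available in the Alpine-based n8n image):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# From the host: list the models Ollama is serving on its default port
curl -s http://localhost:11434/api/tags

# From inside the n8n container: the same endpoint, reached through host.docker.internal
docker exec n8n wget -qO- http://host.docker.internal:11434/api/tags
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;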

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3iplmwo2w7es51743t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3iplmwo2w7es51743t7.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the connection test succeeds, select the DeepSeek model we downloaded (deepseek-r1:7b) under Parameters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd9ydk8epouzop6u2aw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd9ydk8epouzop6u2aw7.png" alt="Image description" width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Test DeepSeek 7B Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that everything is set up:&lt;/p&gt;

&lt;p&gt;Test more complex queries using the DeepSeek 7B model.&lt;br&gt;
Create workflows in n8n to automate LLM-based tasks, such as summarisation, Q&amp;amp;A, or sentiment analysis.&lt;/p&gt;
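
&lt;p&gt;If you want to compare n8n’s output with the model directly, a rough sketch of the same kind of request against Ollama’s generate API looks like this (the prompt is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Call the locally running DeepSeek model without going through n8n
curl -s http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Summarise the benefits of running LLMs locally in two sentences.",
  "stream": false
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;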

&lt;p&gt;&lt;strong&gt;Simple test:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaalwuzcz9iuwdwxbn2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaalwuzcz9iuwdwxbn2f.png" alt="Image description" width="800" height="969"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Madhu: who are you

&amp;lt;think&amp;gt;

&amp;lt;/think&amp;gt;

Greetings! I'm DeepSeek-R1, an artificial intelligence assistant created by DeepSeek. I'm at your service and would be delighted to assist you with any inquiries or tasks you may have.



Madhu: create terraform code to deploy eks cluster in aws


&amp;lt;think&amp;gt; Okay, I need to create a Terraform code to deploy an EKS cluster on AWS. Let me think about how to approach this.

First, I should outline the main components needed for deploying EKS using AWS. The key parts are the master node and worker nodes. So, I'll probably use Terraform modules like ec2 for general EC2 setup, ebs for adding storage, rds for the database, elb for the load balancer, and autoscaling groups to manage scaling.

I remember that EKS requires a cluster name which should be unique, so using a random string would make sense. I'll include a list of required modules at the top of my file to ensure they're loaded first.

Next, I need to define resources for each node type. The master will have an instance type like t2.micro, and workers can use r5.micro with a higher vpc_id for better performance since workers are in the same subnet as the master.

For storage, using ProvisionedS3Bucket makes sense because it's easy to manage and provides persistent storage. I'll set up both a default bucket and maybe an object store for EKS secrets or logs.

The database should be RDS, so I'll configure a MySQL or PostgreSQL instance in the same subnet as the master for consistency.

The load balancer is crucial for traffic distribution. I'll create an ELB that directs traffic to the EKS cluster nodes. The LB needs to be in the same VPC and Security Group as the master to ensure all nodes are reachable.

Autoscaling groups will help manage scaling based on load and temperature. I'll set up two groups, one for scaling out and another for scaling back, with appropriate policies so they don't interfere with each other.

I should also include environment variables in the Infrastructure Block for easy deployment from different environments like dev, staging, or prod.

It's important to note that EKS requires specific IAM roles for each resource. I might need to create these roles in a separate Terraform file or handle them within this code using assume_iam_role statements.

Wait, does the worker node need its own storage? Maybe using an S3 bucket per node isn't efficient, so perhaps using a shared storage solution like EFS or a persistent volume would be better. Or maybe I can use the same storage as the master for simplicity since they're in the same subnet.

I'm also thinking about security groups. They need to allow SSH access and connect all nodes (including ELB) to each other. So, I'll define a security group that includes both nodes and the ELB.

Another consideration is network interfaces. Each node's VPC ID should match so they're in the same subnet for communication.

I should make sure all necessary modules are imported at the top of the file, like module 'ec2', 'ebs', etc., to avoid any undefined variable errors.

Also, I need to define a list of parameter values that can be overridden or changed between environments. This way, each deployment environment can have its own configuration without rebuilding the entire code.

Putting it all together, I'll structure the code with modules at the top, then define variables and resources in order. Each node type (master and worker) will be defined using their respective modules. The EKS cluster will include all these components plus the autoscaling groups for scaling management.

I should also ensure that the output includes all necessary information like cluster name, master IP, etc., so users can easily deploy without manual input.

Finally, I'll include a note about how to apply the changes and test the deployment, as well as mentioning the IAM roles required. This will help the user understand what steps to take next after running the Terraform code. &amp;lt;/think&amp;gt;

To deploy an EKS cluster using Terraform on AWS, you can follow these steps with the provided code:

Define Required Modules: Ensure all necessary modules are imported first.

Set Up Variables: Create a variable list for deployment configuration.

Define Resources:

Master Node
Worker Nodes
Storage Solutions (S3, RDS)
Load Balancer
Autoscaling Groups
Create Infrastructure Block: Manage environment-specific configurations.

Define Tags and Output: Include useful metadata for deployment.

Write Terraform Code:

# Top of the file with required modules
module "ec2"
  define "parameters" = [
    "region",
    "cluster_name",
    "vpc_id",
    "security_group",
    "KeyName", 
    "efs_size",
    "s3_bucket_name",
    "rds_db_name",
    "master BootTime",
    "worker BootTime"
  ]

# Infrastructure configuration

module = "ec2"

var Parameters = parameters()

# Define CloudFormation stack variables
var Region = Parameters.region
var ClusterName = Parameters.cluster_name
var VpcId = Parameters.vpc_id
var SecurityGroup = Parameters.security_group
var KeyName = Parameters.KeyName
var EfSSize = Parameters.efs_size
var S3BucketName = Parameters.s3_bucket_name
var RdsDbName = Parameters.rds_db_name

var Master = "t2.micro"
var Worker = "r5.micro"

# Define storage solutions
var StorageSolution = {
  type: "provisioned_s3_bucket",
  bucket_name: S3BucketName,
  region: Region
}

var EksClusterStorage = {
  type: "efs",
  root_size_in_gigabytes: EfSSize,
  region: Region
}

# Define database solution
var Rds = {
  type: "rds",
  engine: "mysql",
  name: RdsDbName,
  region: Region,
  key_name: KeyName,
  storage_type: "persistent",
  availability_zone: VpcId[0].Availability_Zone
}

# Define load balancer configuration
var LoadBalancer = {
  type: "elastic_load_balancer",
  name: ClusterName LoadBalancer,
  region: Region,
  load_balancer subnet_id: VpcId[0].Subnet_id,
  security_groups: [SecurityGroup],
  inboundrules: [{
    destination_port: 80,
    protocol: "TCP"
  }]
}

# Define autoscaling groups
var AutoscalingOut = {
  type: "autoscaling",
  scaling_out enabled: true,
  min_count: 2,
  max_count: 6,
  policy: { target_arn: "arn:aws:elasticloadbalancing:Region::app:elb/LoadBalancer/Status" }
}

var AutoscalingIn = {
  type: "autoscaling",
  scaling_in enabled: true,
  min_count: 2,
  max_count: 6,
  policy: { target_arn: "arn:aws:elasticloadbalancing:Region::app:elb/LoadBalancer/Status" }
}

# Apply changes
apply {
  # Define the master node
  ec2.master(InstanceType = Master, KeyName = KeyName, AvailabilityZone = VpcId[0].Availability_Zone,
    SecurityGroupIds = [SecurityGroup], SubnetId = VpcId[0].Subnet_id, 
    Storage = StorageSolution, EnableNetworking: true)

# Define the worker nodes
  map "workers" {
    Worker,
    KeyName = KeyName,
    AvailabilityZone = VpcId[0].Availability_Zone,
    SecurityGroupIds = [SecurityGroup],
    SubnetId = VpcId[0].Subnet_id,
    Storage = StorageSolution,
    EnableNetworking: true
  }

# Define the EKS cluster with master and worker nodes
  eks(Storage = EksClusterStorage, LoadBalancer = LoadBalancer, AutoscalingGroups = [AutoscalingOut, AutoscalingIn])

# Infrastructure blocks
  infrastructureBlock "parameters" {
    parameters
  }

  infrastructureBlock "environment" {
    environment = Parameters.environment
  }
}

# Output information for deployment confirmation
output "cluster_name" = ClusterName
output "master_ip" = ec2.master.ip_address
output "workers_count" = (number of workers)
This code provides a comprehensive configuration to deploy an EKS cluster using Terraform on AWS. It includes:

Master Node: Uses a T2.micro instance with RDS for storage and networking.
Worker Nodes: Utilizes R5.micro instances, providing better performance compared to master nodes.
Storage: Both S3 and EFS are configured to provide redundancy and in-cluster storage solutions.
Load Balancer: Ensures traffic is distributed efficiently among all cluster nodes.
Autoscaling Groups: Manages scaling based on load and temperature, with two groups for scaling out/in.
To use this code:

Ensure Terraform Installation: Confirm that Terraform is installed and configured correctly.
Define Parameters: Create a YAML configuration file for your specific environment (dev, staging, prod).
Apply Changes: Use terraform apply to deploy the EKS cluster based on your configurations.
Note: The code assumes you have IAM roles set up with appropriate permissions for EKS operations, including RDS, S3, and ELB access. Ensure these roles are available before running the deployment.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The DeepSeek R1 LLM stands out for its ability to run locally within a &lt;strong&gt;Docker-powered environment&lt;/strong&gt;, providing a privacy-first, efficient, and cost-effective alternative to cloud-based LLMs. By combining &lt;strong&gt;Docker&lt;/strong&gt; with &lt;strong&gt;Ollama&lt;/strong&gt; and &lt;strong&gt;n8n&lt;/strong&gt;, this setup offers a seamless workflow for testing and automation without the need for extensive infrastructure or internet connectivity.&lt;/p&gt;

&lt;p&gt;Feel free to experiment with different workflows and use cases!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learning continues!&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>docker</category>
    </item>
    <item>
      <title>The AI Game-Changer: How to Deploy DeepSeek-R1 Locally on a MacBook Pro M3</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Sun, 26 Jan 2025 21:43:50 +0000</pubDate>
      <link>https://dev.to/cloudgeek7/the-ai-game-changer-running-deepseek-r1-locally-on-a-macbook-pro-m3-4973</link>
      <guid>https://dev.to/cloudgeek7/the-ai-game-changer-running-deepseek-r1-locally-on-a-macbook-pro-m3-4973</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u4wv2k4he2zg34p6k88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u4wv2k4he2zg34p6k88.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve just come across DeepSeek R1, and I’m absolutely excited about it! For those who aren’t familiar, it’s a groundbreaking new open-source AI model that rivals the likes of OpenAI’s GPT-4 and Claude 3.5 Sonnet in math, coding, and reasoning tasks.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The best part? You can run it directly on your machine (check pre-requisites), with full privacy and at absolutely no cost!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is DeepSeek-R1?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wovzqa3kukia6b4e5l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wovzqa3kukia6b4e5l8.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DeepSeek-R1 is DeepSeek’s first generation of reasoning models, with performance comparable to OpenAI-o1; the release also includes six dense models distilled from DeepSeek-R1, based on Llama and Qwen.&lt;/p&gt;

&lt;p&gt;On January 20, 2025, DeepSeek-R1 and DeepSeek-R1-Zero were released, both built on V3-Base. Like V3, each is a mixture-of-experts model with 671B total parameters and 37B activated parameters. DeepSeek also released several “DeepSeek-R1-Distill” models, which do not share R1’s architecture; instead they are open-weight models such as Llama and Qwen fine-tuned on synthetic data generated by R1. The model I have installed and tested is &lt;strong&gt;DeepSeek-R1-Distill-Qwen with 7 billion parameters.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;I’ll walk through the process of setting up a local environment on a MacBook Pro M3 (18GB) for testing the DeepSeek model using Ollama, n8n, and Docker via OrbStack. If you’re looking to get hands-on with an LLM like DeepSeek, follow these steps carefully to get your system set up. This setup is suitable ONLY for performing PoCs and development environments.&lt;/p&gt;

&lt;p&gt;Before we begin, make sure you have the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;MacBook Pro M3 with 18GB RAM&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker installed (via OrbStack - &lt;a href="https://orbstack.dev/download" rel="noopener noreferrer"&gt;https://orbstack.dev/download&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ollama installed (&lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;https://ollama.com&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;n8n installed (&lt;a href="https://docs.n8n.io/hosting" rel="noopener noreferrer"&gt;https://docs.n8n.io/hosting&lt;/a&gt; — for automating workflows, we’ll use this to test DeepSeek)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terminal Access (iTerm2 — &lt;a href="https://iterm2.com" rel="noopener noreferrer"&gt;https://iterm2.com&lt;/a&gt; — for installing deepseek LLM via ollama)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s dive into the steps!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install n8n Using Docker Compose on OrbStack&lt;/strong&gt;&lt;br&gt;
We’ll use Docker Compose to spin up n8n on OrbStack.&lt;/p&gt;

&lt;p&gt;Create a folder for the n8n setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir ~/n8n-docker &amp;amp;&amp;amp; cd ~/n8n-docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Create a simple docker-compose.yml file for n8n:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.9'

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    ports:
      - "5678:5678"
    volumes:
      - ./n8n_data:/home/node/.n8n # Persistent storage for workflows and data
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Start the n8n service:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check if n8n is running via OrbStack&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dxigzujablmr9iq80x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0dxigzujablmr9iq80x7.png" alt="Image description" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;
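
&lt;p&gt;You can also verify from the terminal; the same Docker CLI works whether the engine runs under OrbStack or Docker Desktop:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Show the n8n container with its status and published ports
docker ps --filter "name=n8n" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;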

&lt;p&gt;&lt;strong&gt;4. Open n8n in your browser: &lt;a href="http://localhost:5678" rel="noopener noreferrer"&gt;http://localhost:5678&lt;/a&gt;. Set up an owner account and log in with those credentials.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88txda6yggbotuug2o3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88txda6yggbotuug2o3w.png" alt="Image description" width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can request a free n8n license key (emailed to you) to unlock selected paid features on your community edition of n8n. You can enter and activate your key as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feeql86gwriciwd1rvdce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feeql86gwriciwd1rvdce.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Install Ollama&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ~/Doc/AI-Projects  brew install ollama                     
Warning: ollama 0.5.7 is already installed and up-to-date.
To reinstall 0.5.7, run:
  brew reinstall ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Download the DeepSeek 7B model using Ollama:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Doc/AI-Projects  ollama run deepseek-r1:7b                                    
pulling manifest
pulling 96c415656d37... 100% ▕████████████████▏ 4.7 GB
pulling 369ca498f347... 100% ▕████████████████▏  387 B
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕████████████████▏  148 B
pulling 40fb844194b2... 100% ▕████████████████▏  487 B
verifying sha256 digest
writing manifest
success
&amp;gt;&amp;gt;&amp;gt; /bye

 ~/Doc/AI-Projects  ollama list                                  
NAME              ID              SIZE      MODIFIED
deepseek-r1:7b    0a8c26691023    4.7 GB    26 hours ago
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Integrate DeepSeek with n8n&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s set up a workflow in n8n to interact with the DeepSeek model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.1: Create Chat Trigger, Basic LLM Chain and Ollama Chat Model nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4wgbxf39y8qihgyrgmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4wgbxf39y8qihgyrgmw.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.2: Configure Ollama Chat Model and test the connectivity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Double-click on the Ollama Chat Model node and configure the Ollama Base URL.&lt;/p&gt;

&lt;p&gt;Instead of using localhost, set the base URL to host.docker.internal with Ollama’s default port (11434). Using localhost will result in a connection refused error, because inside a Docker container localhost refers to the container itself; host.docker.internal is how containers reach services running on the host machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3iplmwo2w7es51743t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3iplmwo2w7es51743t7.png" alt="Image description" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the connection test succeeds, select the DeepSeek model we downloaded (deepseek-r1:7b) under Parameters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd9ydk8epouzop6u2aw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhd9ydk8epouzop6u2aw7.png" alt="Image description" width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Test DeepSeek 7B Model&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that everything is set up:&lt;/p&gt;

&lt;p&gt;Test more complex queries using the DeepSeek 7B model.&lt;br&gt;
Create workflows in n8n to automate LLM-based tasks, such as summarisation, Q&amp;amp;A, or sentiment analysis.&lt;/p&gt;
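
&lt;p&gt;Alongside the n8n workflow, you can also exercise the same model through Ollama’s chat API from the terminal (a small sketch; the message is just an example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Send a single chat message to the locally running DeepSeek model
curl -s http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:7b",
  "messages": [{"role": "user", "content": "Explain what n8n is in one sentence."}],
  "stream": false
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;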

&lt;p&gt;&lt;strong&gt;Simple test:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaalwuzcz9iuwdwxbn2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaalwuzcz9iuwdwxbn2f.png" alt="Image description" width="800" height="969"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Madhu: who are you

&amp;lt;think&amp;gt;

&amp;lt;/think&amp;gt;

Greetings! I'm DeepSeek-R1, an artificial intelligence assistant created by DeepSeek. I'm at your service and would be delighted to assist you with any inquiries or tasks you may have.



Madhu: create terraform code to deploy eks cluster in aws


&amp;lt;think&amp;gt; Okay, I need to create a Terraform code to deploy an EKS cluster on AWS. Let me think about how to approach this.

First, I should outline the main components needed for deploying EKS using AWS. The key parts are the master node and worker nodes. So, I'll probably use Terraform modules like ec2 for general EC2 setup, ebs for adding storage, rds for the database, elb for the load balancer, and autoscaling groups to manage scaling.

I remember that EKS requires a cluster name which should be unique, so using a random string would make sense. I'll include a list of required modules at the top of my file to ensure they're loaded first.

Next, I need to define resources for each node type. The master will have an instance type like t2.micro, and workers can use r5.micro with a higher vpc_id for better performance since workers are in the same subnet as the master.

For storage, using ProvisionedS3Bucket makes sense because it's easy to manage and provides persistent storage. I'll set up both a default bucket and maybe an object store for EKS secrets or logs.

The database should be RDS, so I'll configure a MySQL or PostgreSQL instance in the same subnet as the master for consistency.

The load balancer is crucial for traffic distribution. I'll create an ELB that directs traffic to the EKS cluster nodes. The LB needs to be in the same VPC and Security Group as the master to ensure all nodes are reachable.

Autoscaling groups will help manage scaling based on load and temperature. I'll set up two groups, one for scaling out and another for scaling back, with appropriate policies so they don't interfere with each other.

I should also include environment variables in the Infrastructure Block for easy deployment from different environments like dev, staging, or prod.

It's important to note that EKS requires specific IAM roles for each resource. I might need to create these roles in a separate Terraform file or handle them within this code using assume_iam_role statements.

Wait, does the worker node need its own storage? Maybe using an S3 bucket per node isn't efficient, so perhaps using a shared storage solution like EFS or a persistent volume would be better. Or maybe I can use the same storage as the master for simplicity since they're in the same subnet.

I'm also thinking about security groups. They need to allow SSH access and connect all nodes (including ELB) to each other. So, I'll define a security group that includes both nodes and the ELB.

Another consideration is network interfaces. Each node's VPC ID should match so they're in the same subnet for communication.

I should make sure all necessary modules are imported at the top of the file, like module 'ec2', 'ebs', etc., to avoid any undefined variable errors.

Also, I need to define a list of parameter values that can be overridden or changed between environments. This way, each deployment environment can have its own configuration without rebuilding the entire code.

Putting it all together, I'll structure the code with modules at the top, then define variables and resources in order. Each node type (master and worker) will be defined using their respective modules. The EKS cluster will include all these components plus the autoscaling groups for scaling management.

I should also ensure that the output includes all necessary information like cluster name, master IP, etc., so users can easily deploy without manual input.

Finally, I'll include a note about how to apply the changes and test the deployment, as well as mentioning the IAM roles required. This will help the user understand what steps to take next after running the Terraform code. &amp;lt;/think&amp;gt;

To deploy an EKS cluster using Terraform on AWS, you can follow these steps with the provided code:

Define Required Modules: Ensure all necessary modules are imported first.

Set Up Variables: Create a variable list for deployment configuration.

Define Resources:

Master Node
Worker Nodes
Storage Solutions (S3, RDS)
Load Balancer
Autoscaling Groups
Create Infrastructure Block: Manage environment-specific configurations.

Define Tags and Output: Include useful metadata for deployment.

Write Terraform Code:

# Top of the file with required modules
module "ec2"
  define "parameters" = [
    "region",
    "cluster_name",
    "vpc_id",
    "security_group",
    "KeyName", 
    "efs_size",
    "s3_bucket_name",
    "rds_db_name",
    "master BootTime",
    "worker BootTime"
  ]

# Infrastructure configuration

module = "ec2"

var Parameters = parameters()

# Define CloudFormation stack variables
var Region = Parameters.region
var ClusterName = Parameters.cluster_name
var VpcId = Parameters.vpc_id
var SecurityGroup = Parameters.security_group
var KeyName = Parameters.KeyName
var EfSSize = Parameters.efs_size
var S3BucketName = Parameters.s3_bucket_name
var RdsDbName = Parameters.rds_db_name

var Master = "t2.micro"
var Worker = "r5.micro"

# Define storage solutions
var StorageSolution = {
  type: "provisioned_s3_bucket",
  bucket_name: S3BucketName,
  region: Region
}

var EksClusterStorage = {
  type: "efs",
  root_size_in_gigabytes: EfSSize,
  region: Region
}

# Define database solution
var Rds = {
  type: "rds",
  engine: "mysql",
  name: RdsDbName,
  region: Region,
  key_name: KeyName,
  storage_type: "persistent",
  availability_zone: VpcId[0].Availability_Zone
}

# Define load balancer configuration
var LoadBalancer = {
  type: "elastic_load_balancer",
  name: ClusterName LoadBalancer,
  region: Region,
  load_balancer subnet_id: VpcId[0].Subnet_id,
  security_groups: [SecurityGroup],
  inboundrules: [{
    destination_port: 80,
    protocol: "TCP"
  }]
}

# Define autoscaling groups
var AutoscalingOut = {
  type: "autoscaling",
  scaling_out enabled: true,
  min_count: 2,
  max_count: 6,
  policy: { target_arn: "arn:aws:elasticloadbalancing:Region::app:elb/LoadBalancer/Status" }
}

var AutoscalingIn = {
  type: "autoscaling",
  scaling_in enabled: true,
  min_count: 2,
  max_count: 6,
  policy: { target_arn: "arn:aws:elasticloadbalancing:Region::app:elb/LoadBalancer/Status" }
}

# Apply changes
apply {
  # Define the master node
  ec2.master(InstanceType = Master, KeyName = KeyName, AvailabilityZone = VpcId[0].Availability_Zone,
    SecurityGroupIds = [SecurityGroup], SubnetId = VpcId[0].Subnet_id, 
    Storage = StorageSolution, EnableNetworking: true)

# Define the worker nodes
  map "workers" {
    Worker,
    KeyName = KeyName,
    AvailabilityZone = VpcId[0].Availability_Zone,
    SecurityGroupIds = [SecurityGroup],
    SubnetId = VpcId[0].Subnet_id,
    Storage = StorageSolution,
    EnableNetworking: true
  }

# Define the EKS cluster with master and worker nodes
  eks(Storage = EksClusterStorage, LoadBalancer = LoadBalancer, AutoscalingGroups = [AutoscalingOut, AutoscalingIn])

# Infrastructure blocks
  infrastructureBlock "parameters" {
    parameters
  }

  infrastructureBlock "environment" {
    environment = Parameters.environment
  }
}

# Output information for deployment confirmation
output "cluster_name" = ClusterName
output "master_ip" = ec2.master.ip_address
output "workers_count" = (number of workers)
This code provides a comprehensive configuration to deploy an EKS cluster using Terraform on AWS. It includes:

Master Node: Uses a T2.micro instance with RDS for storage and networking.
Worker Nodes: Utilizes R5.micro instances, providing better performance compared to master nodes.
Storage: Both S3 and EFS are configured to provide redundancy and in-cluster storage solutions.
Load Balancer: Ensures traffic is distributed efficiently among all cluster nodes.
Autoscaling Groups: Manages scaling based on load and temperature, with two groups for scaling out/in.
To use this code:

Ensure Terraform Installation: Confirm that Terraform is installed and configured correctly.
Define Parameters: Create a YAML configuration file for your specific environment (dev, staging, prod).
Apply Changes: Use terraform apply to deploy the EKS cluster based on your configurations.
Note: The code assumes you have IAM roles set up with appropriate permissions for EKS operations, including RDS, S3, and ELB access. Ensure these roles are available before running the deployment.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The DeepSeek R1 LLM stands out for its ability to run locally, offering a privacy-first, efficient, and cost-effective solution compared to other cloud-based LLMs. Unlike most models that require significant infrastructure or internet connectivity, DeepSeek — especially when paired with tools like n8n and Ollama — provides a streamlined workflow for testing locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcbf58t62ts1d8qnzwfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxcbf58t62ts1d8qnzwfp.png" alt="Image description" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What sets DeepSeek apart is its focus on lightweight deployment without compromising performance, making it an ideal choice for developers looking to test models locally or build workflows tailored to specific use cases. While other LLMs might offer higher-scale models or extensive APIs, they often come with added costs, latency, and reliance on external infrastructure. DeepSeek bridges this gap by delivering powerful local processing while maintaining flexibility and ease of integration.&lt;/p&gt;

&lt;p&gt;Feel free to experiment with different workflows and use cases!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Learning continues!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7irm7ecqw7pv52o8c2tn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7irm7ecqw7pv52o8c2tn.png" alt="Image description" width="570" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/deepseek-ai/DeepSeek-R1" rel="noopener noreferrer"&gt;https://github.com/deepseek-ai/DeepSeek-R1&lt;/a&gt;&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/DeepSeek" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/DeepSeek&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.deepseek.com" rel="noopener noreferrer"&gt;https://www.deepseek.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deepseek</category>
    </item>
    <item>
      <title>The Future of AI: Agent-Driven Paradigms and Transformations in Software Development</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Tue, 31 Dec 2024 11:45:15 +0000</pubDate>
      <link>https://dev.to/aws-heroes/the-future-of-ai-agent-driven-paradigms-and-transformations-in-software-development-55l</link>
      <guid>https://dev.to/aws-heroes/the-future-of-ai-agent-driven-paradigms-and-transformations-in-software-development-55l</guid>
      <description>&lt;p&gt;🌟 As 2024 Comes to a Close: A Wake-Up Call for Developers 🌟&lt;/p&gt;

&lt;p&gt;As we approach the close of 2024 and step into 2025, I want to &lt;strong&gt;caution developer communities&lt;/strong&gt; about the profound transformation taking place in the software development landscape. The rise of agent-driven technologies and generative AI tools is accelerating automation at an unprecedented rate, redefining what it means to be a developer. While this brings opportunities for enhanced productivity, it also demands a &lt;strong&gt;shift in skillsets&lt;/strong&gt; — moving away from routine coding tasks toward mastering AI collaboration, ethical AI governance, and high-level system design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developers who fail to adapt risk being left behind&lt;/strong&gt; in a world where natural language interfaces, autonomous debugging, and self-healing systems are becoming the new norm. The era of creative, strategy-driven software engineering is upon us, and embracing these changes is no longer optional — &lt;strong&gt;it’s essential for staying relevant.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flc1cvq359lr9l6unl6ie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flc1cvq359lr9l6unl6ie.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rapid evolution of artificial intelligence (AI) has profoundly influenced countless industries, with software development being one of the most dynamic arenas of change. As we move toward an AI-driven future, intelligent agents — autonomous systems that perceive, reason, and act — are emerging as game-changers in software design, development, and deployment.&lt;/p&gt;

&lt;p&gt;I want to briefly discuss the rise of AI agents, their potential to reshape the development process, and how this paradigm shift will influence the future of software engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are AI Agents?&lt;/strong&gt;&lt;br&gt;
AI agents are programs that autonomously perform tasks by leveraging capabilities such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Perception&lt;/strong&gt;: Interpreting data from the environment (e.g., sensor inputs, user commands).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning&lt;/strong&gt;: Making decisions based on goals, constraints, and contexts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Acting&lt;/strong&gt;: Executing tasks with minimal or no human intervention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning&lt;/strong&gt;: Continuously improving performance by analyzing past outcomes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Agents as Co-Developers&lt;/strong&gt;&lt;br&gt;
The most notable evolution of agents in software development is their role as co-developers or even autonomous developers. AI tools such as GitHub Copilot and ChatGPT already demonstrate this concept, but future agents are expected to take it further by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Designing architectures.&lt;/li&gt;
&lt;li&gt;Debugging complex systems autonomously.&lt;/li&gt;
&lt;li&gt;Refactoring legacy code without human input.&lt;/li&gt;
&lt;li&gt;Collaborating with teams as virtual peers, capable of participating in standups and sprint planning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme26420r4uozp4ll97hn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fme26420r4uozp4ll97hn.png" alt="Image description" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent-Driven Software Development Paradigms
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. The Rise of Agent-Orchestrated Pipelines&lt;/strong&gt;&lt;br&gt;
In modern DevOps, automation is key. AI agents will further streamline Continuous Integration/Continuous Deployment (CI/CD) pipelines by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamically adapting configurations for optimal performance.&lt;/li&gt;
&lt;li&gt;Automating rollbacks and recovery with predictive analysis.&lt;/li&gt;
&lt;li&gt;Orchestrating microservices in real time to adapt to changing workloads.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will reduce the reliance on manual monitoring and intervention, empowering developers to focus on innovation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Natural Language-Centric Development&lt;/strong&gt;&lt;br&gt;
The future of coding may no longer require extensive syntax knowledge. With natural language interfaces, developers will communicate with agents to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate entire applications from high-level descriptions.&lt;/li&gt;
&lt;li&gt;Add features by specifying functionality in plain language.&lt;/li&gt;
&lt;li&gt;Perform advanced queries like “Optimise my database schema for this traffic pattern.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Code-Generation Agents&lt;/strong&gt;&lt;br&gt;
Generative AI has already shown its potential in creating boilerplate code. The next leap is enabling agents to write domain-specific applications tailored to specific business requirements, even in industries with stringent compliance standards like finance, healthcare, or aviation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Self-Healing Systems&lt;/strong&gt;&lt;br&gt;
AI agents will monitor systems in production, identifying issues, generating patches, and deploying fixes autonomously. This approach significantly minimises downtime and enhances reliability, reducing operational costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Democratisation of Development&lt;/strong&gt;&lt;br&gt;
Agent-driven paradigms will enable non-developers to build applications, lowering the barrier to entry. No-code/low-code platforms will evolve into agent-powered environments, where users can define complex workflows or applications without ever writing a line of code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Impact on the Role of Software Developers
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha7ps2xg9w3xrukzadfm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fha7ps2xg9w3xrukzadfm.png" alt="Image description" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While AI agents will take over repetitive and resource-intensive tasks, developers will transition into roles that emphasise creativity, strategy, and oversight. Here’s how the role will shift:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Focus on High-Level Design&lt;/strong&gt;&lt;br&gt;
Developers will become system architects, focusing on designing modular and scalable solutions while agents handle implementation details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Emphasis on Ethics and Governance&lt;/strong&gt;&lt;br&gt;
With AI taking centre stage, developers must ensure that agents adhere to ethical guidelines, fairness, and security standards.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Collaboration with AI Agents&lt;/strong&gt;&lt;br&gt;
Rather than replacing developers, agents will work alongside them. The ability to effectively manage and communicate with agents will become a critical skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Enhanced Productivity&lt;/strong&gt;&lt;br&gt;
By automating mundane tasks, agents will free developers to focus on creative problem-solving and long-term innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Challenges and Risks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwabfzqkimg2aw9mo02m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkwabfzqkimg2aw9mo02m.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While the potential benefits of agent-driven development are immense, there are significant challenges to consider:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Trust and Transparency&lt;/strong&gt;&lt;br&gt;
How do we ensure that AI agents make ethical and reliable decisions? This includes understanding how they generate code or refactor systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Skills Gap&lt;/strong&gt;&lt;br&gt;
Developers will need new skill sets to work effectively with AI agents, such as prompt engineering, machine learning fundamentals, and human-AI collaboration techniques.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Security Vulnerabilities&lt;/strong&gt;&lt;br&gt;
Agents, if not properly managed, could introduce vulnerabilities into systems. Rigorous validation and monitoring processes will be essential.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Job Displacement&lt;/strong&gt;&lt;br&gt;
The democratisation of software development could displace some traditional roles. However, it will also create new opportunities in AI engineering, governance, and management.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Trends in AI and Software Development
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hr3ebfw00hj1g6ukpag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hr3ebfw00hj1g6ukpag.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking ahead, I anticipate several transformative trends:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Autonomous DevOps&lt;/strong&gt;&lt;br&gt;
AI agents will completely manage infrastructure provisioning, scaling, and monitoring, paving the way for self-operating clouds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Hyper-Personalisation&lt;/strong&gt;&lt;br&gt;
Agents will enable developers to create ultra-customised software tailored to individual user needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. AI-First Architectures&lt;/strong&gt;&lt;br&gt;
Applications will be designed with agents as primary actors, orchestrating workflows, interactions, and decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Real-Time Collaboration Across Geographies&lt;/strong&gt;&lt;br&gt;
Agent-driven development will facilitate real-time collaboration, allowing globally distributed teams to work seamlessly across time zones and languages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fila9p0z3p0wtafa746u2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fila9p0z3p0wtafa746u2.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The future of software development is undeniably agent-driven.&lt;/strong&gt; While challenges exist, the opportunities to enhance productivity, democratise innovation, and create intelligent systems are unparalleled. Developers must embrace this shift by upskilling, collaborating with agents, and focusing on creativity and ethical oversight.&lt;/p&gt;

&lt;p&gt;As AI agents become more sophisticated, the way we think about building software will fundamentally change — shifting from writing code to defining goals, from debugging to designing, and from deployment to seamless orchestration.&lt;/p&gt;

&lt;p&gt;The dawn of agent-driven development is here, and it’s poised to reshape the software landscape in profound and exciting ways.&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>aiagents</category>
    </item>
    <item>
      <title>Exploring AWS EKS Auto Mode: A Simplified Kubernetes Experience</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Sat, 21 Dec 2024 15:52:28 +0000</pubDate>
      <link>https://dev.to/aws-heroes/exploring-aws-eks-auto-mode-a-simplified-kubernetes-experience-3744</link>
      <guid>https://dev.to/aws-heroes/exploring-aws-eks-auto-mode-a-simplified-kubernetes-experience-3744</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pw2tzsawcxbrcs3tih5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5pw2tzsawcxbrcs3tih5.png" alt="Image description" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EKS Auto Mode&lt;/strong&gt; extends AWS management of Kubernetes clusters beyond the cluster itself, to allow AWS to also set up and manage the infrastructure that enables the smooth operation of your workloads. You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations. Cluster infrastructure managed by AWS includes many Kubernetes capabilities as core components, as opposed to add-ons, such as compute autoscaling, pod and service networking, application load balancing, cluster DNS, block storage, and GPU support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;p&gt;EKS Auto Mode provides the following high-level features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamline Kubernetes Cluster Management:&lt;/strong&gt; EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Availability:&lt;/strong&gt; EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for manual capacity planning and ensures application availability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Efficiency:&lt;/strong&gt; EKS Auto Mode is designed to optimize compute costs while adhering to the flexibility defined by your NodePool and workload requirements. It also terminates unused instances and consolidates workloads onto other nodes to improve cost efficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; EKS Auto Mode uses AMIs that are treated as immutable for your nodes. These AMIs enforce locked-down software, enable SELinux mandatory access controls, and provide read-only root file systems. Additionally, nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce), after which they are automatically replaced with new nodes. This approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated Upgrades:&lt;/strong&gt; EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Within the 21-day maximum node lifetime, manual intervention might be required if blocking PDBs or other configurations prevent updates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Managed Components:&lt;/strong&gt; EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons. This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Customisable NodePools and NodeClasses:&lt;/strong&gt; If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While default NodePools and NodeClasses can’t be edited, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements.&lt;/p&gt;
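
&lt;p&gt;As a rough illustration, the sketch below shows what a custom NodePool that restricts EKS Auto Mode to on-demand, general-purpose instance families might look like. The API versions, label keys, and field names are assumptions based on the EKS Auto Mode and Karpenter documentation; verify them against the current docs before applying anything.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical custom NodePool; field names and label keys are assumptions (check the EKS Auto Mode docs)
kubectl apply -f - &amp;lt;&amp;lt;'EOF'
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-on-demand
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default                      # reference the built-in NodeClass or a custom one
      requirements:
        - key: eks.amazonaws.com/instance-category
          operator: In
          values: ["c", "m", "r"]          # limit which instance families Auto Mode may choose
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]            # no Spot capacity for this pool
  limits:
    cpu: "200"                             # cap the total vCPU this pool can provision
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Workloads that target this pool would then only run on instances satisfying these requirements, while the default NodePools remain untouched.&lt;/p&gt;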

&lt;p&gt;&lt;strong&gt;Before Auto Mode&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd4iqyp86yy0m2p0gxfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd4iqyp86yy0m2p0gxfi.png" alt="Before Auto Mode" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After Auto Mode&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztnqz25tycsxridrirtj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztnqz25tycsxridrirtj.png" alt="After Auto Mode" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon EKS Auto Mode managed instances
&lt;/h2&gt;

&lt;p&gt;Amazon EKS Auto Mode automates routine tasks for creating new EC2 Instances, and attaches them as nodes to your EKS cluster. EKS Auto Mode detects when a workload can’t fit onto existing nodes, and creates a new EC2 Instance.&lt;/p&gt;

&lt;p&gt;Amazon EKS Auto Mode is responsible for creating, deleting, and patching EC2 Instances. You are responsible for the containers and pods deployed on the instance.&lt;/p&gt;

&lt;p&gt;EC2 Instances created by EKS Auto Mode are different from other EC2 Instances: they are managed instances. These managed instances are owned by EKS and are more restricted. You can’t directly access instances managed by EKS Auto Mode or install software on them.&lt;/p&gt;

&lt;p&gt;AWS suggests running either EKS Auto Mode or self-managed Karpenter. You can install both during a migration or in an advanced configuration. If you have both installed, configure your node pools so that workloads are associated with either Karpenter or EKS Auto Mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standard EC2 Instance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are responsible for patching and updating the instance.&lt;/li&gt;
&lt;li&gt;EKS is not responsible for the software on the instance.&lt;/li&gt;
&lt;li&gt;You can delete the EC2 Instance using the EC2 API.&lt;/li&gt;
&lt;li&gt;You can use SSH to access the EC2 Instance.&lt;/li&gt;
&lt;li&gt;You determine the operating system and image (AMI).&lt;/li&gt;
&lt;li&gt;You can deploy workloads that rely on Windows or Ubuntu functionality.&lt;/li&gt;
&lt;li&gt;You determine what instance type and family to launch.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EKS Auto Mode managed instance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS automatically patches and updates the instance.&lt;/li&gt;
&lt;li&gt;EKS is responsible for certain software on the instance, such as kubelet, the container runtime, and the operating system.&lt;/li&gt;
&lt;li&gt;EKS determines the number of instances deployed in your account. If you delete a workload, EKS will reduce the number of instances in your account.&lt;/li&gt;
&lt;li&gt;You can deploy pods and containers to the managed instance.&lt;/li&gt;
&lt;li&gt;AWS determines the operating system and image.&lt;/li&gt;
&lt;li&gt;You can deploy containers based on Linux, but without specific OS dependencies.&lt;/li&gt;
&lt;li&gt;AWS determines what instance type and family to launch. You can use a Node Pool to limit the instance types EKS Auto Mode selects from.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Identity and access in EKS Auto Mode
&lt;/h2&gt;

&lt;p&gt;In EKS Auto Mode, AWS IAM roles are automatically mapped to Kubernetes permissions through EKS access entries, removing the need for manual configuration of aws-auth ConfigMaps or custom bindings. When you create a new auto mode cluster, EKS automatically creates the corresponding Kubernetes permissions using Access entries, ensuring that AWS services and cluster components have the appropriate access levels within both the AWS and Kubernetes authorization systems. This automated integration reduces configuration complexity and helps prevent permission-related issues that commonly occur when managing EKS clusters.&lt;/p&gt;
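
&lt;p&gt;If you want to inspect what was created, the access entries are visible through the AWS CLI. The commands below are a minimal sketch; the cluster name and role ARN are placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List the access entries EKS created for the cluster (cluster name is a placeholder)
aws eks list-access-entries --cluster-name my-auto-cluster

# Inspect a single entry, for example the node role mapping (ARN is a placeholder)
aws eks describe-access-entry \
  --cluster-name my-auto-cluster \
  --principal-arn arn:aws:iam::111122223333:role/MyAutoModeNodeRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;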

&lt;p&gt;&lt;strong&gt;Cluster IAM role&lt;/strong&gt;&lt;br&gt;
The Cluster IAM role is an AWS Identity and Access Management (IAM) role used by Amazon EKS to manage permissions for Kubernetes clusters. This role grants Amazon EKS the necessary permissions to interact with other AWS services on behalf of your cluster, and is automatically configured with Kubernetes permissions using EKS access entries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node IAM role&lt;/strong&gt;&lt;br&gt;
The Node IAM role is an AWS Identity and Access Management (IAM) role used by Amazon EKS to manage permissions for worker nodes in Kubernetes clusters. This role grants EC2 instances running as Kubernetes nodes the necessary permissions to interact with AWS services and resources, and is automatically configured with Kubernetes RBAC permissions using EKS access entries.&lt;/p&gt;

&lt;h2&gt;
  
  
  VPC networking and load balancing in EKS Auto Mode
&lt;/h2&gt;

&lt;p&gt;When you use EKS Auto Mode, AWS manages the VPC Container Network Interface (CNI) configuration and load balancer provisioning for your cluster. You can influence networking behaviors by defining NodeClass objects and applying specific annotations to your Service and Ingress resources, while maintaining the automated operational model that EKS Auto Mode provides.&lt;/p&gt;
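
&lt;p&gt;As a sketch of the load-balancing side, the Service below asks EKS Auto Mode to provision and manage a Network Load Balancer. The loadBalancerClass value and the scheme annotation are assumptions drawn from the EKS Auto Mode documentation; confirm them there before relying on this.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical Service relying on EKS Auto Mode managed load balancing (values assumed; verify in the EKS docs)
kubectl apply -f - &amp;lt;&amp;lt;'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  loadBalancerClass: eks.amazonaws.com/nlb   # lets EKS Auto Mode provision and manage the NLB
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;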

&lt;p&gt;&lt;strong&gt;VPC CNI networking&lt;/strong&gt;&lt;br&gt;
With EKS Auto Mode, you do not configure the AWS VPC CNI directly; AWS manages node and pod networking. Instead, you express your requirements by creating a NodeClass Kubernetes object.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure VPC CNI with NodeClass&lt;/strong&gt;&lt;br&gt;
The NodeClass resource in EKS Auto Mode allows you to customize certain aspects of the VPC Container Network Interface (CNI) configuration without directly managing the CNI plugin. Through NodeClass, you can specify security group selections, control node placement across VPC subnets, set SNAT policies, configure network policies, and enable network event logging. This approach maintains the automated operational model of EKS Auto Mode while providing flexibility for network customization.&lt;/p&gt;

&lt;p&gt;You can use a NodeClass to do the following; a small example sketch follows the list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select a Security Group for Nodes&lt;/li&gt;
&lt;li&gt;Control how nodes are placed on VPC Subnets&lt;/li&gt;
&lt;li&gt;Set the Node SNAT Policy to random or disabled&lt;/li&gt;
&lt;li&gt;Set the Network Policy to Default Deny or Default Allow&lt;/li&gt;
&lt;li&gt;Enable Network Event Logging to a file.&lt;/li&gt;
&lt;/ul&gt;
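
&lt;p&gt;For example, a custom NodeClass covering the networking options above might look roughly like the following. The field names (the selector terms, snatPolicy, networkPolicy, and networkPolicyEventLogs) and the subnet and security-group tags are assumptions; treat this as a sketch to validate against the EKS Auto Mode documentation, not a definitive manifest.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical NodeClass for EKS Auto Mode networking (field names and tag values assumed; verify in the EKS docs)
kubectl apply -f - &amp;lt;&amp;lt;'EOF'
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: private-nodes
spec:
  subnetSelectorTerms:
    - tags:
        eks-auto-nodes: "true"             # placeholder tag; place nodes only on subnets carrying it
  securityGroupSelectorTerms:
    - tags:
        Name: my-cluster-node-sg           # placeholder; selects the security group attached to nodes
  snatPolicy: Random                       # or Disabled
  networkPolicy: DefaultAllow              # or DefaultDeny
  networkPolicyEventLogs: Enabled          # write network event logs to a file on the node
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;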

&lt;h2&gt;
  
  
  Considerations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;EKS Auto Mode automatically formats and configures NVMe local storage on supported instance types. For nodes with multiple NVMe drives, EKS sets up a RAID 0 array. This automation eliminates the need for manual formatting and RAID configuration of local NVMe storage in EKS clusters.&lt;/li&gt;
&lt;li&gt;Amazon EKS Auto Mode does not support AWS Fault Injection Service. For more information, see Managing Fault Injection Service experiments in the AWS Resilience Hub User Guide.&lt;/li&gt;
&lt;li&gt;You do not need to install the Neuron Device Plugin on EKS Auto Mode nodes.&lt;/li&gt;
&lt;li&gt;If you have other types of nodes in your cluster, you need to configure the Neuron Device Plugin not to run on EKS Auto Mode nodes. For more information, see Control if a workload is deployed on EKS Auto Mode nodes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EKS Auto Mode supports:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EKS Network Policies.&lt;/li&gt;
&lt;li&gt;The HostPort and HostNetwork options for Kubernetes Pods.&lt;/li&gt;
&lt;li&gt;Pods in public or private subnets.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EKS Auto Mode does not support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Security Groups per Pod (SGPP).&lt;/li&gt;
&lt;li&gt;Custom Networking. The IP Addresses of Pods and Nodes must be from the same CIDR Block.&lt;/li&gt;
&lt;li&gt;Warm IP, warm prefix, and warm ENI configurations.&lt;/li&gt;
&lt;li&gt;Minimum IP targets configuration.&lt;/li&gt;
&lt;li&gt;Enabling or disabling prefix delegation.&lt;/li&gt;
&lt;li&gt;Other configurations supported by the open-source AWS CNI.&lt;/li&gt;
&lt;li&gt;Network Policy configurations such as conntrack timer customization (default is 300s).&lt;/li&gt;
&lt;li&gt;Exporting network event logs to CloudWatch.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/containers/getting-started-with-amazon-eks-auto-mode" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/containers/getting-started-with-amazon-eks-auto-mode&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/automode.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/eks/latest/userguide/automode.html&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Debunking the Misconception: Gaming as a Beneficial Pursuit</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Thu, 31 Aug 2023 18:25:19 +0000</pubDate>
      <link>https://dev.to/aws-heroes/debunking-the-misconception-gaming-as-a-beneficial-pursuit-16ke</link>
      <guid>https://dev.to/aws-heroes/debunking-the-misconception-gaming-as-a-beneficial-pursuit-16ke</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6png1b41wcuqlidomoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6png1b41wcuqlidomoq.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was thrilled to have been in the sunny City of Angels (Los Angeles, CA) for last weekend's (August 25–26, 2023) Amazon Web Services (AWS) VALORANT Champions VIP Hospitality experience. I was involved in some Tech Talks and had the privilege to watch the Riot Games—VALORANT Champions final.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unlike my other tech blogs, this one is different, and I will talk about gaming.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We had a very interesting discussion on the concept of the Valorant game with &lt;strong&gt;Alex “Goldenboy” Mendez&lt;/strong&gt;, who is an esports host and commentator with roots as a competitive Call of Duty player. This was followed by a roundtable session with key technical leaders from AWS and Riot Games.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS x Riot partnership — &lt;a href="https://aws.amazon.com/sports/riot"&gt;https://aws.amazon.com/sports/riot&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS and Riot Games have teamed up to transform how Riot’s Esports content is created and distributed in the cloud and reimagine how fans can experience League of Legends, VALORANT, and League of Legends: Wild Rift Esports events through AWS’ first-ever data analytics integration for esports broadcasts.&lt;/p&gt;

&lt;p&gt;In July 2022, Riot Games announced they were going all-in on AWS, focused on powering Riot’s infrastructure, including their new global content factories that publish Riot’s esports, music production, analytics, statistics, and animation. Other possible workloads include leveraging AWS instances to create a future slate of cloud-based on-demand and live content, and using AWS to power the build-out of Riot’s cloud-first production facilities based in Dublin, APAC, and Seattle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxb7zbvta2b7z6ov9xowq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxb7zbvta2b7z6ov9xowq.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Riot has advanced in broadcast and gaming experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Riot already uses AWS’ services across its games to focus on how it designs and deploys content to help provide the best possible game experience, using &lt;a href="https://aws.amazon.com/edge"&gt;edge&lt;/a&gt; computing like AWS &lt;a href="https://aws.amazon.com/outposts"&gt;Outposts&lt;/a&gt; and AWS &lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/localzones"&gt;Local Zones&lt;/a&gt;. Riot aims for latency under 80 milliseconds for League of Legends, and under 35 milliseconds for Valorant. Now Riot is going to use AWS to enhance its esports broadcasts throughout each competitive season for LoL Esports, the Valorant Champions Tour, and Wild Rift Esports.&lt;/p&gt;

&lt;p&gt;As part of the collaboration, Riot is selecting AWS as its official cloud artificial intelligence, cloud machine learning, cloud deep learning, and cloud services provider. It’s impressive that a game company like Riot has to use a lot of those technologies to get its games out these days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Now let us talk about the Valorant game and the general misconception about gaming.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmix232bfuwrf1prsv759.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmix232bfuwrf1prsv759.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Valorant?&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://playvalorant.com"&gt;Valorant&lt;/a&gt; is a team-based first-person tactical hero shooter set in the near future. Players play as one of a set of Agents, characters based on several countries and cultures around the world. In the main game mode, players are assigned to either the attacking or defending team, with each team having five players on it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ycmuvdst938lzhazce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90ycmuvdst938lzhazce.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Objective of the game&lt;/strong&gt;&lt;br&gt;
The aim is to become the first team to win 13 rounds. To accomplish this, players must either eliminate the members of the opposing team or finish the objective assigned to them. For Attackers, this involves planting a spike (or the game’s equivalent of a bomb). Another unique feature of Valorant is its use of character abilities. Each character has a set of abilities that can be used to gain an advantage in combat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Game results:&lt;/strong&gt; &lt;a href="https://valorantesports.com/news/valorant-champions-2023-recap-evil-geniuses-end-their-roller-coaster-year-in-glory/en-gb"&gt;https://valorantesports.com/news/valorant-champions-2023-recap-evil-geniuses-end-their-roller-coaster-year-in-glory/en-gb&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F111irao1eilt4k8afj23.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F111irao1eilt4k8afj23.jpg" alt="Image description" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Misconception:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For years, gaming has been associated with negative connotations. Concerns about addiction, violence, social isolation, and negative impacts on physical and mental health have fueled the belief that gaming is a bad habit. However, it’s important to remember that, like any form of entertainment, moderation is key. Just as binge-watching television shows or spending excessive time on social media can have negative effects, so can overindulging in gaming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejsyftotevzokrb9wmyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejsyftotevzokrb9wmyf.png" alt="Image description" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Reality: The Advantages of Gaming&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive Benefits:&lt;/strong&gt; Contrary to popular belief, gaming can have a positive impact on cognitive abilities. Many video games like Valorant require strategic thinking, problem-solving, and quick decision-making. Games like puzzle-solving adventures, strategy simulations, and even some action games enhance players’ analytical and critical thinking skills.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordination and Reflexes:&lt;/strong&gt; Action-oriented games like Valorant demand high levels of hand-eye coordination and fast reflexes. Gamers often develop better motor skills and reaction times, which can extend to improved performance in daily activities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teamwork and Collaboration:&lt;/strong&gt; Multiplayer games like Valorant, whether played online or in person, encourage players to collaborate, communicate, and work as a team to achieve goals. This can translate into real-world scenarios where teamwork is essential.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creativity and Imagination:&lt;/strong&gt; Many games immerse players in expansive worlds and narratives, fostering creativity and imagination. Games like Minecraft have even been used as tools for education, allowing players to construct and explore their own imaginative creations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stress Relief:&lt;/strong&gt; Gaming can provide an effective escape from the stresses of everyday life. Engaging in a virtual world can be a healthy way to unwind and relax, much like reading a book or watching a movie.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning Opportunities:&lt;/strong&gt; Simulation games, historical strategy games, and educational games offer players a chance to learn about various subjects in an interactive manner. Players can gain insights into history, science, economics, and more, often without even realising it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence and Resilience:&lt;/strong&gt; Many games present challenges and obstacles that require repeated attempts to overcome. This cultivates persistence, resilience, and the understanding that success often comes after multiple failures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Social Interaction:&lt;/strong&gt; Contrary to the notion that gaming promotes social isolation, online multiplayer games facilitate social interaction among players from around the world. These interactions can lead to friendships, teamwork, and even a sense of belonging within gaming communities.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnod2h0wevfqlqjn5odxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnod2h0wevfqlqjn5odxu.png" alt="Image description" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Moderation is Key&lt;/strong&gt;&lt;br&gt;
It’s essential to acknowledge that while gaming offers numerous advantages, excessive or irresponsible gaming can indeed lead to negative consequences. Balance is crucial. Just as one wouldn’t spend every waking moment watching television, individuals should manage their gaming time wisely.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
Gaming is a multifaceted hobby that extends far beyond mere entertainment. It’s a realm where creativity, critical thinking, teamwork, and stress relief coalesce. By dispelling the misconception that gaming is inherently a bad habit, we can better appreciate its potential to enhance cognitive abilities, improve social skills, and provide a healthy outlet for relaxation.&lt;/p&gt;

&lt;p&gt;So, let’s shift our perspective and recognise that the world of gaming is rich with opportunities for growth and enrichment. I personally look at it as a professional sport. As with any activity, the key lies in approaching it mindfully and striking a balance that aligns with a fulfilling and well-rounded life.&lt;/p&gt;

</description>
      <category>gaming</category>
      <category>valorant</category>
      <category>riotgames</category>
      <category>aws</category>
    </item>
    <item>
      <title>From Imagination to Reality: Exploring the Path of Generative AI</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Tue, 18 Jul 2023 05:10:55 +0000</pubDate>
      <link>https://dev.to/aws-heroes/from-imagination-to-reality-exploring-the-path-of-generative-ai-4c36</link>
      <guid>https://dev.to/aws-heroes/from-imagination-to-reality-exploring-the-path-of-generative-ai-4c36</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Generative AI isn’t just a technology or a business case — it is a key part of a society in which people and machines work together.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgaojlx546mdrzbq1ke6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgaojlx546mdrzbq1ke6.png" alt="Image description" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is generative AI?
&lt;/h2&gt;

&lt;p&gt;Generative AI can learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but don’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs.&lt;/p&gt;

&lt;p&gt;Generative AI uses a number of techniques that continue to evolve. Foremost are AI foundation models, which are trained on a broad set of unlabeled data that can be used for different tasks, with additional fine-tuning. Complex math and enormous computing power are required to create these trained models, but they are, in essence, prediction algorithms.&lt;/p&gt;

&lt;p&gt;Today, generative AI most commonly creates content in response to natural language requests; it doesn’t require knowledge of code or the ability to write it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg8ahvnvlvec3vp5kvei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg8ahvnvlvec3vp5kvei.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the benefits and applications of generative AI?
&lt;/h2&gt;

&lt;p&gt;Foundation models, including generative pretrained transformers (which drive ChatGPT), are among the AI architecture innovations that can be used to automate, augment humans or machines, and autonomously execute business and IT processes.&lt;/p&gt;

&lt;p&gt;The benefits of generative AI include faster product development, enhanced customer experience, and improved employee productivity, but the specifics depend on the use case. End users should be realistic about the value they are looking to achieve, especially when using a service as is, which has major limitations. Generative AI creates artifacts that can be inaccurate or biased, making human validation essential and potentially limiting the time it saves workers. Gartner recommends connecting use cases to KPIs to ensure that any project either improves operational efficiency, creates net new revenue, or creates better experiences.&lt;/p&gt;

&lt;p&gt;In a recent Gartner webinar poll of more than 2,500 executives, 38% indicated that customer experience and retention are the primary purposes of their generative AI investments. This was followed by revenue growth (26%), cost optimisation (17%), and business continuity (7%).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx99wbhtu6x6z6549row5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx99wbhtu6x6z6549row5.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What are the risks of generative AI?
&lt;/h2&gt;

&lt;p&gt;The risks associated with generative AI are significant and rapidly evolving. A wide array of threat actors have already used the technology to create “deep fakes” or copies of products, and generate artifacts to support increasingly complex scams.&lt;/p&gt;

&lt;p&gt;ChatGPT and other tools like it are trained on large amounts of publicly available data. They are not designed to be compliant with the General Data Protection Regulation (GDPR), copyright law, or other regulations, so it’s imperative to pay close attention to your enterprise’s use of these platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Oversight risks to monitor include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lack of transparency. Generative AI and ChatGPT models are unpredictable, and not even the companies behind them always understand everything about how they work.&lt;/li&gt;
&lt;li&gt;Accuracy. Generative AI systems sometimes produce inaccurate and fabricated answers. Assess all outputs for accuracy, appropriateness and actual usefulness before relying on or publicly distributing information.&lt;/li&gt;
&lt;li&gt;Bias. You need policies or controls in place to detect biased outputs and deal with them in a manner consistent with company policy and any relevant legal requirements.&lt;/li&gt;
&lt;li&gt;Intellectual property (IP) and copyright. There are currently no verifiable data governance and protection assurances regarding confidential enterprise information. Users should assume that any data or queries they enter into ChatGPT and its competitors will become public information, and we advise enterprises to put in place controls to avoid inadvertently exposing IP.&lt;/li&gt;
&lt;li&gt;Cybersecurity and fraud. Enterprises must prepare for malicious actors’ use of generative AI systems for cyber and fraud attacks, such as those that use deep fakes for social engineering of personnel, and ensure mitigating controls are put in place. Confer with your cyber-insurance provider to verify the degree to which your existing policy covers AI-related breaches.&lt;/li&gt;
&lt;li&gt;Sustainability. Generative AI uses significant amounts of electricity. Choose vendors that reduce power consumption and leverage high-quality renewable energy to mitigate the impact on your sustainability goals.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are some practical uses of generative AI today?
&lt;/h2&gt;

&lt;p&gt;The field of generative AI will progress rapidly in both scientific discovery and technology commercialisation, but use cases are emerging quickly in creative content, content improvement, synthetic data, generative engineering, and generative design.&lt;/p&gt;

&lt;p&gt;In-use, high-level practical applications today include the following.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Written content augmentation and creation: Producing a “draft” output of text in a desired style and length&lt;/li&gt;
&lt;li&gt;Question answering and discovery: Enabling users to locate answers to input, based on data and prompt information&lt;/li&gt;
&lt;li&gt;Tone: Text manipulation, to soften language or professionalise text&lt;/li&gt;
&lt;li&gt;Summarisation: Offering shortened versions of conversations, articles, emails and webpages&lt;/li&gt;
&lt;li&gt;Simplification: Breaking down titles, creating outlines and extracting key content&lt;/li&gt;
&lt;li&gt;Classification of content for specific use cases: Sorting by sentiment, topic, etc.&lt;/li&gt;
&lt;li&gt;Chatbot performance improvement: Improving entity extraction, whole-conversation sentiment classification, and generation of journey flows from general descriptions&lt;/li&gt;
&lt;li&gt;Software coding: Code generation, translation, explanation and verification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Emerging use cases with long-term impacts include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating medical images that show the future development of a disease​&lt;/li&gt;
&lt;li&gt;Synthetic data helping augment scarce data, mitigate bias, preserve data privacy and simulate future scenarios&lt;/li&gt;
&lt;li&gt;Applications proactively suggesting additional actions to users and providing them with information&lt;/li&gt;
&lt;li&gt;Legacy code modernisation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Should I craft a usage policy for generative AI?
&lt;/h2&gt;

&lt;p&gt;Your workforce is likely already using generative AI, either on an experimental basis or to support their job-related tasks. To avoid “shadow” usage and a false sense of compliance, Gartner recommends crafting a usage policy rather than enacting an outright ban.&lt;/p&gt;

&lt;p&gt;Keep the policy simple. If using ChatGPT or another off-the-shelf model, it can be as streamlined as three don’ts and two do’s:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don’t input any personally identifiable information.&lt;/li&gt;
&lt;li&gt;Don’t input any sensitive information.&lt;/li&gt;
&lt;li&gt;Don’t input any company IP.&lt;/li&gt;
&lt;li&gt;Do turn off history if using external tools (like ChatGPT) that enable that choice.&lt;/li&gt;
&lt;li&gt;Do closely monitor outputs, which are subject to sometimes subtle but meaningful hallucinations, factual errors, and biased or inappropriate statements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the company is using its own instance of a large language model, the privacy concerns that inform limiting inputs go away. However, the need to keep a close eye on outputs remains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfvyncw4r05ageoxvh8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfvyncw4r05ageoxvh8o.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Where should I start with generative AI?
&lt;/h2&gt;

&lt;p&gt;Many enterprises have generative AI pilots for code generation, text generation, or visual design underway. To establish a pilot, you can take one of three routes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Off-the-shelf. Use an existing foundational model directly by inputting prompts. You might, for example, ask the model to create a job description for a software engineer or suggest alternative subject lines for marketing emails.&lt;/li&gt;
&lt;li&gt;Prompt engineering. Program and connect software to a foundational model and leverage it. This technique, which is the most common of the three, allows you to use public services while protecting IP and leveraging private data to create more precise, specific, and useful responses. Building an HR benefits chatbot that answers employee questions about company-specific policies is an example of prompt engineering.&lt;/li&gt;
&lt;li&gt;Custom. Building a new foundational model goes beyond the reach of most companies, but it’s possible to tune a model. This involves adding a layer of proprietary data in a way that significantly alters the way the foundational model behaves. While costly, customising a model offers the highest level of flexibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Who are the major tech providers in the generative AI market?
&lt;/h2&gt;

&lt;p&gt;The Generative AI marketplace is on fire. Beyond the big platform players, there are hundreds of specialty providers funded by ample venture capital and a wave of new open-source models and capabilities. Enterprise application providers, such as Salesforce and SAP, are building LLM capabilities into their platforms. Organisations like Microsoft, Google, Amazon Web Services (AWS), and IBM have invested hundreds of millions of dollars and massive compute power to build the foundational models on which services like ChatGPT and others depend.&lt;/p&gt;

&lt;p&gt;Gartner considers the current major players to be as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://ai.google/discover/generativeai"&gt;Google&lt;/a&gt; has two large language models, Palm, a multimodal model, and Bard, a pure language model. They are embedding their generative AI technology into their suite of workplace applications, which will immediately get it in the hands of millions of people.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://news.microsoft.com/reinventing-productivity"&gt;Microsoft&lt;/a&gt; and &lt;a href="https://openai.com/blog/chatgpt"&gt;OpenAI&lt;/a&gt; are marching in lockstep. Like Google, Microsoft is embedding generative AI technology into its products, but it has the first-mover advantage and buzz of ChatGPT on its side.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/generative-ai"&gt;Amazon&lt;/a&gt; has partnered with &lt;a href="https://huggingface.co"&gt;Hugging Face&lt;/a&gt;, which has a number of LLMs available on an open-source basis, to build solutions. Amazon also has Bedrock, which provides access to generative AI on the cloud via AWS, and has announced plans for Titan, a set of two AI models that create text and improve searches and personalisation.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.ibm.com/events/think/generative-ai"&gt;IBM&lt;/a&gt; has multiple foundation models and a strong ability to fine-tune both its own and third-party models by injecting data and retraining and employing the model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;This blog is an extract from the Gartner site; comments and credits should go to the original authors from Gartner.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reference: &lt;a href="https://www.gartner.com/en/insights/generative-ai-for-business"&gt;https://www.gartner.com/en/insights/generative-ai-for-business&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>generativeai</category>
      <category>ai</category>
    </item>
    <item>
      <title>Building and Launching a Serverless GraphQL React Application with AWS Amplify: A Step-by-Step Guide</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Sun, 23 Apr 2023 05:01:27 +0000</pubDate>
      <link>https://dev.to/aws-heroes/building-and-launching-a-serverless-graphql-react-application-with-aws-amplify-a-step-by-step-guide-366k</link>
      <guid>https://dev.to/aws-heroes/building-and-launching-a-serverless-graphql-react-application-with-aws-amplify-a-step-by-step-guide-366k</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja3771gacjsyiszp8f5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja3771gacjsyiszp8f5c.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Serverless computing has been gaining popularity in recent years due to its numerous benefits, such as scalability, cost-effectiveness, and flexibility. AWS Amplify is a development platform that allows developers to build and deploy serverless applications quickly and easily. In this blog post, I will guide you through the process of building and launching a serverless GraphQL React application with AWS Amplify.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before getting started, you will need the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An AWS account.&lt;/li&gt;
&lt;li&gt;Node.js installed on your machine.&lt;/li&gt;
&lt;li&gt;Basic knowledge of React and GraphQL.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1: Create a new React app
&lt;/h2&gt;

&lt;p&gt;To get started, let’s create a new React application. Open up your terminal and run the following command:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;npx create-react-app my-app&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon  npx create-react-app my-app                                                                                                                                  ✔  10:27:05

Creating a new React app in /Users/macpro/Documents/amplify-hackathon/my-app.

Installing packages. This might take a couple of minutes.
Installing react, react-dom, and react-scripts with cra-template...


added 1423 packages in 43s

235 packages are looking for funding
  run `npm fund` for details

Initialized a git repository.

Installing template dependencies using npm...

added 62 packages, and changed 1 package in 8s

235 packages are looking for funding
  run `npm fund` for details
Removing template package using npm...


removed 1 package, and audited 1485 packages in 3s

235 packages are looking for funding
  run `npm fund` for details

6 high severity vulnerabilities

To address all issues (including breaking changes), run:
  npm audit fix --force

Run `npm audit` for details.

Created git commit.

Success! Created my-app at /Users/macpro/Documents/amplify-hackathon/my-app
Inside that directory, you can run several commands:

  npm start
    Starts the development server.

  npm run build
    Bundles the app into static files for production.

  npm test
    Starts the test runner.

  npm run eject
    Removes this tool and copies build dependencies, configuration files
    and scripts into the app directory. If you do this, you can’t go back!

We suggest that you begin by typing:

  cd my-app
  npm start

Happy hacking!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This will create a new React app in a directory called my-app.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main !4 ?5  ll                                                                                ✔  11:00:37
total 5136
-rw-r--r--     1 macpro  staff   3.3K 23 Apr 10:27 README.md
drwxr-xr-x     9 macpro  staff   288B 23 Apr 10:35 amplify
drwxr-xr-x  1053 macpro  staff    33K 23 Apr 10:52 node_modules
-rw-r--r--     1 macpro  staff   1.6M 23 Apr 10:53 package-lock.json
-rw-r--r--     1 macpro  staff   867B 23 Apr 10:53 package.json
drwxr-xr-x     8 macpro  staff   256B 23 Apr 10:27 public
drwxr-xr-x    14 macpro  staff   448B 23 Apr 11:00 src
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To &lt;strong&gt;start&lt;/strong&gt; the React app, run &lt;strong&gt;“npm start”&lt;/strong&gt; from the &lt;em&gt;my-app&lt;/em&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main  npm start                                                                                                                   ✔  07:42:12

&amp;gt; my-app@0.1.0 start
&amp;gt; react-scripts start

(node:29873) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
(Use `node --trace-deprecation ...` to show where the warning was created)
(node:29873) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
Starting the development server...
Compiled successfully!

You can now view my-app in the browser.

  Local:            http://localhost:3000
  On Your Network:  http://192.168.0.87:3000

Note that the development build is not optimized.
To create a production build, use npm run build.

webpack compiled successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start the development server at &lt;strong&gt;&lt;em&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;. Press ctrl+c to stop the server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69c3d6r2z9z76x12irh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69c3d6r2z9z76x12irh7.png" alt="Image description" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Install AWS Amplify CLI
&lt;/h2&gt;

&lt;p&gt;Next, we need to install the AWS Amplify CLI. Open up your terminal and run the following command:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;npm install -g @aws-amplify/cli&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main !5 ?3  npm install -g @aws-amplify/cli           1 ✘  4s  22:11:35

changed 26 packages in 25s

7 packages are looking for funding
  run `npm fund` for details
npm notice
npm notice New minor version of npm available! 9.4.0 -&amp;gt; 9.6.5
npm notice Changelog: https://github.com/npm/cli/releases/tag/v9.6.5
npm notice Run npm install -g npm@9.6.5 to update!
npm notice
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This will install the Amplify CLI globally on your machine or upgrade the old version.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 3: Configure Amplify
&lt;/h2&gt;

&lt;p&gt;Now that we have installed the Amplify CLI, let’s configure it. Run the following command in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main  amplify configure            ✔  07:48:51
Follow these steps to set up access to your AWS account:

Sign in to your AWS administrator account:
https://console.aws.amazon.com/
Press Enter to continue

Specify the AWS Region
? region:  us-east-1
Follow the instructions at
https://docs.amplify.aws/cli/start/install/#configure-the-amplify-cli

to complete the user creation in the AWS console
https://console.aws.amazon.com/iamv2/home#/users/create
Press Enter to continue

Enter the access key of the newly created user:
? accessKeyId:  # YOUR_ACCESS_KEY_ID
? secretAccessKey:  # YOUR_SECRET_ACCESS_KEY
This would update/create the AWS Profile in your local machine
? Profile Name:  # (default)

Successfully set up the new user.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This will prompt you to enter your AWS credentials. Follow the prompts to enter your Access Key ID and Secret Access Key.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 4: Initialise Amplify
&lt;/h2&gt;

&lt;p&gt;Now that Amplify is configured, let’s initialise it in our React app. Run the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;amplify init&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main  amplify init                                                                                                                                    ✔  10:28:31

Note: It is recommended to run this command from the root of your app directory
? Enter a name for the project myapp
The following configuration will be applied:

Project information
| Name: myapp
| Environment: dev
| Default editor: Visual Studio Code
| App type: javascript
| Javascript framework: react
| Source Directory Path: src
| Distribution Directory Path: build
| Build Command: npm run-script build
| Start Command: npm run-script start

? Initialize the project with the above configuration? Yes
Using default provider  awscloudformation
? Select the authentication method you want to use: AWS profile

For more information on AWS Profiles, see:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html

? Please choose the profile you want to use default
Adding backend environment dev to AWS Amplify app: d2fcsc4w0p2um2

# This "input" configures a global authorization rule to enable public access to
Deployment completed.
Deployed root stack myapp [ ======================================== ] 4/4
 amplify-myapp-dev-103057       AWS::CloudFormation::Stack     CREATE_COMPLETE                Sun Apr 23 2023 10:31:34…
 DeploymentBucket               AWS::S3::Bucket                CREATE_COMPLETE                Sun Apr 23 2023 10:31:25…
 AuthRole                       AWS::IAM::Role                 CREATE_COMPLETE                Sun Apr 23 2023 10:31:18…
 UnauthRole                     AWS::IAM::Role                 CREATE_COMPLETE                Sun Apr 23 2023 10:31:19…

✔ Help improve Amplify CLI by sharing non sensitive configurations on failures (y/N) · yes
Deployment state saved successfully.
✔ Initialized provider successfully.
✅ Initialized your environment successfully.

Your project has been successfully initialized and connected to the cloud!

Some next steps:
"amplify status" will show you what you've added already and if it's locally configured or deployed
"amplify add &amp;lt;category&amp;gt;" will allow you to add features like user login or a backend API
"amplify push" will build all your local backend resources and provision it in the cloud
"amplify console" to open the Amplify Console and view your project status
"amplify publish" will build all your local backend and frontend resources (if you have hosting category added) and provision it in the cloud

Pro tip:
Try "amplify add api" to create a backend API and then "amplify push" to deploy everything
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This will initialise Amplify in your React app. Follow the prompts to configure Amplify.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 5: Add a GraphQL API
&lt;/h2&gt;

&lt;p&gt;With Amplify initialised, we can now add a GraphQL API. Run the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;amplify add api&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main !1 ?1  amplify add api                                                                                                                    ✔  3m 2s  10:32:13

? Select from one of the below mentioned services: GraphQL
? Here is the GraphQL API that we will create. Select a setting to edit or continue Continue
? Choose a schema template: Single object with fields (e.g., “Todo” with ID, name, description)

⚠️  WARNING: your GraphQL API currently allows public create, read, update, and delete access to all models via an API Key. To configure PRODUCTION-READY authorization rules, review: https://docs.amplify.aws/cli/graphql/authorization-rules

✅ GraphQL schema compiled successfully.

Edit your schema at ~/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema.graphql or place .graphql files in a directory at ~/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema
✔ Do you want to edit the schema now? (Y/n) · no
✅ Successfully added resource myapp locally

✅ Some next steps:
"amplify push" will build all your local backend resources and provision it in the cloud
"amplify publish" will build all your local backend and frontend resources (if you have hosting category added) and provision it in the cloud
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This will prompt you to choose a type of API. Select “GraphQL” and follow the prompts to configure your API.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 6: Define your GraphQL schema
&lt;/h2&gt;

&lt;p&gt;Now that we have added a GraphQL API to our Amplify project, we need to define the schema. The schema defines the types and operations that can be performed on our API. Open up the &lt;em&gt;amplify/backend/api/myapp/schema.graphql&lt;/em&gt; file in your code editor and replace the contents with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Todo @model {
  id: ID!
  name: String!
  description: String
  done: Boolean
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This code defines a new Todo type with fields for id, name, description, and done. The @model directive tells Amplify to create a DynamoDB table and all the required CRUD operations for this type.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 7: Deploy your GraphQL API
&lt;/h2&gt;

&lt;p&gt;Now that we have defined our schema, we can deploy our GraphQL API. Enter the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;amplify push&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ~/Documents/amplify-hackathon/my-app  main !1 ?1  amplify push                                                                                                                         ✔  34s  10:34:53
⠏ Fetching updates to backend environment: dev from the cloud.
⚠️  WARNING: Some types do not have authorization rules configured. That means all create, read, update, and delete operations are denied on these types:
  - Todo
Learn more about "@auth" authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules
✅ GraphQL schema compiled successfully.

Edit your schema at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema.graphql or place .graphql files in a directory at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema
✔ Successfully pulled backend environment dev from the cloud.
⠴ Building resource api/myapp
⚠️  WARNING: Some types do not have authorization rules configured. That means all create, read, update, and delete operations are denied on these types:
  - Todo
Learn more about "@auth" authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules
✅ GraphQL schema compiled successfully.

Edit your schema at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema.graphql or place .graphql files in a directory at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema

    Current Environment: dev

┌──────────┬───────────────┬───────────┬───────────────────┐
│ Category │ Resource name │ Operation │ Provider plugin   │
├──────────┼───────────────┼───────────┼───────────────────┤
│ Api      │ myapp         │ Create    │ awscloudformation │
└──────────┴───────────────┴───────────┴───────────────────┘
✔ Are you sure you want to continue? (Y/n) · yes

⚠️  WARNING: Some types do not have authorization rules configured. That means all create, read, update, and delete operations are denied on these types:
  - Todo
Learn more about "@auth" authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules
✅ GraphQL schema compiled successfully.

Edit your schema at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema.graphql or place .graphql files in a directory at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema
⠴ Building resource api/myapp
⚠️  WARNING: Some types do not have authorization rules configured. That means all create, read, update, and delete operations are denied on these types:
  - Todo
Learn more about "@auth" authorization rules here: https://docs.amplify.aws/cli/graphql/authorization-rules
✅ GraphQL schema compiled successfully.

Edit your schema at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema.graphql or place .graphql files in a directory at /Users/macpro/Documents/amplify-hackathon/my-app/amplify/backend/api/myapp/schema
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2

Deploying root stack myapp [ ---------------------------------------- ] 0/2
 amplify-myapp-dev-103057       AWS::CloudFormation::Stack     UPDATE_IN_PROGRESS             Sun Apr 23 2023 10:36:28…
 apimyapp                       AWS::CloudFormation::Stack     CREATE_IN_PROGRESS             Sun Apr 23 2023 10:36:32…
 GraphQLAPI                     AWS::AppSync::GraphQLApi       CREATE_COMPLETE                Sun Apr 23 2023 10:36:40…
 GraphQLAPIDefaultApiKey215A6D… AWS::AppSync::ApiKey           CREATE_COMPLETE                Sun Apr 23 2023 10:36:44…
 GraphQLAPINONEDS95A13CF0       AWS::AppSync::DataSource       CREATE_COMPLETE                Sun Apr 23 2023 10:36:44…
 GraphQLAPITransformerSchema3C… AWS::AppSync::GraphQLSchema    CREATE_COMPLETE                Sun Apr 23 2023 10:37:45…
 Todo                           AWS::CloudFormation::Stack     CREATE_COMPLETE                Sun Apr 23 2023 10:38:35…
 CustomResourcesjson            AWS::CloudFormation::Stack     CREATE_COMPLETE                Sun Apr 23 2023 10:38:49…
Deployed api myapp [ ======================================== ] 6/6
Deployment completed.

✔ Generated GraphQL operations successfully and saved at src/graphql
Deployment state saved successfully.

GraphQL endpoint: https://lsa4eg44ufffhg3zwrottvemfq.appsync-api.us-east-1.amazonaws.com/graphql
GraphQL API KEY: da2-anzozaxgsrhl7jr3yxam4k3oee

GraphQL transformer version: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Follow the prompts to deploy your API. Amplify will create a new DynamoDB table and all the required CRUD operations for our Todo type.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
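
&lt;p&gt;The operations that were just generated in &lt;em&gt;src/graphql&lt;/em&gt; are plain JavaScript modules that export each GraphQL document as a template string. As a rough sketch (the exact fields depend on your schema and codegen settings), the generated &lt;em&gt;listTodos&lt;/em&gt; query used later in this tutorial looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/graphql/queries.js (simplified sketch of the generated file)
export const listTodos = /* GraphQL */ `
  query ListTodos($filter: ModelTodoFilterInput, $limit: Int, $nextToken: String) {
    listTodos(filter: $filter, limit: $limit, nextToken: $nextToken) {
      items {
        id
        name
        description
        done
      }
      nextToken
    }
  }
`;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;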

&lt;h2&gt;
  
  
  Step 8: Add the AWS AppSync SDK to your app
&lt;/h2&gt;

&lt;p&gt;To interact with our GraphQL API, we need to add the AWS AppSync SDK to our app. Enter the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;npm install aws-appsync&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main !1 ?3  npm install aws-appsync                                                                                                           ✔  4m 37s  10:39:44

npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
npm WARN deprecated uuid@3.4.0: Please upgrade  to version 7 or higher.  Older versions may use Math.random() in certain circumstances, which is known to be problematic.  See https://v8.dev/blog/math-random for details.

added 205 packages, and audited 1690 packages in 35s

238 packages are looking for funding
  run `npm fund` for details

6 high severity vulnerabilities

To address all issues (including breaking changes), run:
  npm audit fix --force

Run `npm audit` for details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 9: Configure the AWS AppSync SDK
&lt;/h2&gt;

&lt;p&gt;Now that we have added the AWS AppSync SDK to our app, we need to configure it to work with our API. Create a new file called &lt;em&gt;aws-exports.js&lt;/em&gt; in the &lt;em&gt;src&lt;/em&gt; folder and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const awsmobile = {
  "aws_project_region": "&amp;lt;your-project-region&amp;gt;",
  "aws_appsync_graphqlEndpoint": "&amp;lt;your-api-endpoint&amp;gt;",
  "aws_appsync_region": "&amp;lt;your-api-region&amp;gt;",
  "aws_appsync_authenticationType": "API_KEY",
  "aws_appsync_apiKey": "&amp;lt;your-api-key&amp;gt;"
};

export default awsmobile;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Now, replace the placeholders (your-project-region, your-api-endpoint, your-api-region and your-api-key) with the appropriate values from your Amplify project. The GraphQL endpoint and API key are printed at the end of the amplify push output, and you can also find these values in the aws-exports.js file in your project directory.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
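
&lt;p&gt;Amplify also needs to be given this configuration once, before any API calls are made. A minimal sketch, assuming the default Create React App entry point at &lt;em&gt;src/index.js&lt;/em&gt;, looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/index.js: register the AppSync settings with Amplify at startup
// (on older aws-amplify versions the import may be a default export instead)
import { Amplify } from 'aws-amplify';
import awsconfig from './aws-exports';

// Pass the region, endpoint, auth type and API key to Amplify
Amplify.configure(awsconfig);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;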

&lt;h2&gt;
  
  
  Step 10: Create a component to display a list of todos
&lt;/h2&gt;

&lt;p&gt;Now that we have configured the AWS AppSync SDK, we can create a component to display a list of todos. Create a new file called TodoList.js in the src folder and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useEffect, useState } from 'react';
import { API, graphqlOperation } from 'aws-amplify';

import { listTodos } from './graphql/queries';

function TodoList() {
  const [todos, setTodos] = useState([]);

  useEffect(() =&amp;gt; {
    async function fetchTodos() {
      try {
        const todoData = await API.graphql(graphqlOperation(listTodos));
        const todos = todoData.data.listTodos.items;
        setTodos(todos);
      } catch (err) {
        console.log('error fetching todos: ', err);
      }
    }

    fetchTodos();
  }, []);

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;h2&amp;gt;Todos&amp;lt;/h2&amp;gt;
      &amp;lt;ul&amp;gt;
        {todos.map(todo =&amp;gt; (
          &amp;lt;li key={todo.id}&amp;gt;
            &amp;lt;strong&amp;gt;{todo.name}&amp;lt;/strong&amp;gt;: {todo.description}
          &amp;lt;/li&amp;gt;
        ))}
      &amp;lt;/ul&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}

export default TodoList;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This code defines a new component called TodoList that fetches a list of todos from our GraphQL API and displays them in an unordered list.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 11: Add the TodoList component to your app
&lt;/h2&gt;

&lt;p&gt;Now that we have created the &lt;em&gt;TodoList&lt;/em&gt; component, we can add it to our app. Open up the &lt;em&gt;App.js&lt;/em&gt; file in your code editor and replace the contents with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React from 'react';
import './App.css';

import TodoList from './TodoList';

function App() {
  return (
    &amp;lt;div className="App"&amp;gt;
      &amp;lt;header className="App-header"&amp;gt;
        &amp;lt;h1&amp;gt;My Todo App&amp;lt;/h1&amp;gt;
      &amp;lt;/header&amp;gt;
      &amp;lt;main&amp;gt;
        &amp;lt;TodoList /&amp;gt;
      &amp;lt;/main&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}

export default App;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;This code imports the TodoList component and adds it to our app.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 12: Launch your app
&lt;/h2&gt;

&lt;p&gt;Now that we have added the TodoList component to our app, we can launch it. Enter the following command in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;npm start&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app/src  main !4 ?6  npm start                                                                                                                        ✔  11s  10:53:26

&amp;gt; my-app@0.1.0 start
&amp;gt; react-scripts start

(node:45156) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
(Use `node --trace-deprecation ...` to show where the warning was created)
(node:45156) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
Starting the development server...
Compiled successfully!

You can now view my-app in the browser.

  Local:            http://localhost:3000
  On Your Network:  http://192.168.0.87:3000

Note that the development build is not optimized.
To create a production build, use npm run build.

webpack compiled successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start the development server and open your app in your default browser. You should see a list of todos displayed on the screen.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj9fz8xtue8lx3gc54kr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj9fz8xtue8lx3gc54kr.png" alt="Image description" width="800" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;With this minimal app as a starting point, you can add new features and functionality to create a powerful and scalable application; one example is sketched below.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
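
&lt;p&gt;For example, to let users create new todos you could call the generated &lt;em&gt;createTodo&lt;/em&gt; mutation with the same &lt;em&gt;API.graphql&lt;/em&gt; pattern used in &lt;em&gt;TodoList&lt;/em&gt;. Here is a minimal sketch; the helper file name &lt;em&gt;addTodo.js&lt;/em&gt; is just an example, and it assumes the mutations generated in &lt;em&gt;src/graphql&lt;/em&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// src/addTodo.js (hypothetical helper): create a todo via the generated mutation
import { API, graphqlOperation } from 'aws-amplify';
import { createTodo } from './graphql/mutations';

// Saves a new todo and returns the record stored by AppSync/DynamoDB
export async function addTodo(name, description) {
  try {
    const result = await API.graphql(
      graphqlOperation(createTodo, { input: { name, description, done: false } })
    );
    return result.data.createTodo;
  } catch (err) {
    console.log('error creating todo: ', err);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;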

&lt;h2&gt;
  
  
  ERROR:
&lt;/h2&gt;

&lt;p&gt;If you see the error below, &lt;em&gt;Module not found: Error: Can’t resolve ‘aws-amplify’&lt;/em&gt;, it means the &lt;em&gt;aws-amplify&lt;/em&gt; module has not been installed yet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; ~/Documents/amplify-hackathon/my-app  main !4 ?5  npm start                                                                                                                             ✔  4s  10:45:33

&amp;gt; my-app@0.1.0 start
&amp;gt; react-scripts start

(node:44800) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
(Use `node --trace-deprecation ...` to show where the warning was created)
(node:44800) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
Starting the development server...
Failed to compile.

Module not found: Error: Can't resolve 'aws-amplify' in '/Users/macpro/Documents/amplify-hackathon/my-app/src'
ERROR in ./src/TodoList.js 6:0-52
Module not found: Error: Can't resolve 'aws-amplify' in '/Users/macpro/Documents/amplify-hackathon/my-app/src'

webpack compiled with 1 error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To fix the above error, install the &lt;em&gt;aws-amplify&lt;/em&gt; module by running the command &lt;strong&gt;&lt;em&gt;npm install aws-amplify&lt;/em&gt;&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app/src  main !4 ?6  npm install aws-amplify                                                                                                               ✔  10:51:00

npm WARN deprecated source-map-url@0.4.1: See https://github.com/lydell/source-map-url#deprecated
npm WARN deprecated urix@0.1.0: Please see https://github.com/lydell/urix#deprecated
npm WARN deprecated resolve-url@0.2.1: https://github.com/lydell/resolve-url#deprecated
npm WARN deprecated source-map-resolve@0.5.3: See https://github.com/lydell/source-map-resolve#deprecated
npm WARN deprecated uuid@3.4.0: Please upgrade  to version 7 or higher.  Older versions may use Math.random() in certain circumstances, which is known to be problematic.  See https://v8.dev/blog/math-random for details.
npm WARN deprecated uglify-es@3.3.9: support for ECMAScript is superseded by `uglify-js` as of v3.13.0

added 1871 packages, and audited 3561 packages in 1m

251 packages are looking for funding
  run `npm fund` for details

6 high severity vulnerabilities

To address all issues (including breaking changes), run:
  npm audit fix --force

Run `npm audit` for details.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 13: Clean-up
&lt;/h2&gt;

&lt;p&gt;When you are done experimenting, delete the project so that the resources Amplify created in the cloud and locally are removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For locally accessible projects&lt;/strong&gt;&lt;br&gt;
Within the project directory that you want to delete, run the following amplify delete command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For cloud-based projects&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pull the backend environment associated with your application to your local environment by running the following amplify pull command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify pull
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Within the project directory that you want to delete, run the following amplify delete command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;amplify delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Output:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/Documents/amplify-hackathon/my-app  main !5 ?5  amplify delete                                                                                                                  ✔  5s  11:55:13
✔ Are you sure you want to continue? This CANNOT be undone. (This will delete all the environments of the project from the cloud and wipe out all the local files created by Amplify CLI) (y/N) · yes
⠋ Deleting resources from the cloud. This will take a few minutes.
Deleting env: dev.
✔ Project deleted in the cloud.
✅ Project deleted locally.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Through this tutorial, I have demonstrated how easy it can be to build a basic serverless application using AWS Amplify. The app itself is deliberately minimal, but it gives you a working GraphQL backend and React frontend to build on. I encourage you to continue exploring the many features and capabilities of Amplify and to use it to build your own amazing serverless applications.&lt;/p&gt;

&lt;p&gt;If you want to build real-time multi-user experiences using GraphQL on AWS Amplify - &lt;a href="https://aws.amazon.com/blogs/mobile/build-real-time-multi-user-experiences-using-graphql-on-aws-amplify" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/mobile/build-real-time-multi-user-experiences-using-graphql-on-aws-amplify&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to build AWS AppSync GraphQL Photo Sample application - &lt;a href="https://github.com/aws-samples/aws-amplify-graphql" rel="noopener noreferrer"&gt;https://github.com/aws-samples/aws-amplify-graphql&lt;/a&gt;&lt;/p&gt;
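
&lt;p&gt;As a small taste of that real-time behaviour, the codegen step earlier also produced subscription documents in &lt;em&gt;src/graphql/subscriptions.js&lt;/em&gt;. A minimal sketch of listening for todos created by other clients (assuming that generated file) might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch: listen for newly created todos (assumes the generated subscriptions file)
import { API, graphqlOperation } from 'aws-amplify';
import { onCreateTodo } from './graphql/subscriptions';

const subscription = API.graphql(graphqlOperation(onCreateTodo)).subscribe({
  next: ({ value }) =&amp;gt; console.log('new todo: ', value.data.onCreateTodo),
  error: (err) =&amp;gt; console.log('subscription error: ', err),
});

// Call subscription.unsubscribe() when you no longer need updates.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;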

</description>
    </item>
    <item>
      <title>Legends of Linux Trivia — Red Hat Free Badges</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Sun, 26 Mar 2023 13:05:27 +0000</pubDate>
      <link>https://dev.to/aws-heroes/legends-of-linux-trivia-red-hat-free-badges-1g93</link>
      <guid>https://dev.to/aws-heroes/legends-of-linux-trivia-red-hat-free-badges-1g93</guid>
      <description>&lt;h2&gt;
  
  
  How well do you know Linux?
&lt;/h2&gt;

&lt;p&gt;Red Hat, a leading provider of open source software solutions, is now offering a fantastic opportunity for Linux enthusiasts and collectors to earn badges that showcase their expertise in Linux. All you need to do is successfully complete a &lt;a href="https://redhatdg.co1.qualtrics.com/jfe/form/SV_bjRFSHqPdTpIjoa?Q_R=R_2xWsDMYVUqedzyX&amp;amp;Choose_Quiz=true&amp;amp;Q_R_DEL=1"&gt;Red Hat quiz&lt;/a&gt; and score 100% to earn one of the three badges on offer. You can choose to earn one, two or all three badges, depending on how much of a Linux expert you are.&lt;/p&gt;

&lt;p&gt;It’s a great way to demonstrate your proficiency in Linux and add some impressive credentials to your resume. So why wait? Take the quiz and show the world how well you know Linux!&lt;/p&gt;

&lt;h2&gt;
  
  
  Red Hat Enterprise Linux Trivia — Red Hat Titan
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6km0bf10l29lft2ur5xx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6km0bf10l29lft2ur5xx.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux Command-Line Trivia — Command-Line Captain
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws3835i47rh5a34vpg42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws3835i47rh5a34vpg42.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux Open Source Trivia — Open Sourcer-er
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nfbedlcv0zqf8igm7cx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nfbedlcv0zqf8igm7cx.png" alt="Image description" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>redhat</category>
      <category>linux</category>
      <category>opensource</category>
    </item>
    <item>
      <title>AWS Hero: A Distinction Worth Earning</title>
      <dc:creator>Madhu Kumar</dc:creator>
      <pubDate>Sat, 11 Feb 2023 07:36:35 +0000</pubDate>
      <link>https://dev.to/aws-heroes/how-do-i-become-an-aws-hero-39n9</link>
      <guid>https://dev.to/aws-heroes/how-do-i-become-an-aws-hero-39n9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwq73k56taza84006n1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwq73k56taza84006n1w.png" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Becoming an AWS Hero is a recognition awarded by Amazon Web Services (AWS) to individuals who have demonstrated a deep understanding of the AWS platform and have made significant contributions to the AWS community.&lt;/p&gt;

&lt;p&gt;The AWS Heroes program recognises a vibrant, worldwide group of AWS experts whose enthusiasm for knowledge-sharing has a real impact within the community. Heroes go above and beyond to share knowledge in a variety of ways including online via social media, blog posts, open source projects, videos, and forums; or in person at conferences, workshops, and user group events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p6ycplyyec0qxsec9ar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p6ycplyyec0qxsec9ar.png" alt="Image description" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm always active in dev communities, but, I began my journey a few years ago by actively participating in the AWS community, sharing my knowledge with others and learning from them (for example, through technical blogs, as a speaker, and organising AWS immersion days and game days, to name a few). When I heard about the AWS Community Builders program, I applied and got selected, and I haven't looked back since then. The support I received from AWS (community managers and members) is phenomenal, and that helped me come out of my shell, recognise and invest in my own development, bring more visibility to what I do, share my knowledge with the wider AWS community, and achieve my goal. Today, I'm proud to say that I'm an AWS Hero. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is the AWS Heroes Program?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;AWS is comprised of users whose enthusiasm has created a vibrant worldwide community of experts who go above and beyond to share their extensive AWS knowledge in many ways, including through social media, blogs, events, user groups, and workshops.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The purpose of the AWS Heroes program is to recognise and honour the most engaged and influential developers who have a significant impact within the community. It also provides Heroes a place to tell their story and connect with like-minded developers. If you are looking to learn from and connect with experienced members of the AWS community, check out their bios and connect online.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bax8pd933xy6b7mb6k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bax8pd933xy6b7mb6k5.png" alt="Image description" width="361" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Discovering the hero within is a journey that cannot be accomplished through a one-size-fits-all approach.&lt;/strong&gt;&lt;/em&gt; However, I’ve got some valuable tips that I personally used to unleash my inner hero. Keep reading to learn more and &lt;em&gt;&lt;strong&gt;embark on your own hero’s journey!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Develop your AWS expertise:&lt;/strong&gt; Gain a deep understanding of the AWS services and technologies through hands-on experience, attending AWS events and webinars, and reading AWS documentation and best practices. (&lt;a href="https://aws.amazon.com/products" rel="noopener noreferrer"&gt;https://aws.amazon.com/products&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engage with the AWS community:&lt;/strong&gt; Participate in AWS user groups, forums, and social media, and help others with their AWS-related questions and challenges. (&lt;a href="https://aws.amazon.com/developer/community" rel="noopener noreferrer"&gt;https://aws.amazon.com/developer/community&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Share your expertise:&lt;/strong&gt; Write blog posts, present at AWS events and meetups, and create educational resources to share your AWS knowledge with others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get certified:&lt;/strong&gt; Obtain one or more AWS certifications, such as the AWS Certified Solutions Architect, to demonstrate your technical expertise. (&lt;a href="https://aws.amazon.com/certification" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attend AWS events:&lt;/strong&gt; Attend AWS re:Invent, AWS Summit, and other AWS events to network with other AWS experts and learn about the latest AWS developments. (&lt;a href="https://aws.amazon.com/events" rel="noopener noreferrer"&gt;https://aws.amazon.com/events&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborate with others:&lt;/strong&gt; Work with other AWS experts on projects and initiatives to build your network and showcase your expertise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seek recognition:&lt;/strong&gt; Apply for the AWS Community Builder program to receive recognition for your contributions to the AWS community.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;You can learn more about AWS Community Builder Program: &lt;a href="https://aws.amazon.com/developer/community/community-builders" rel="noopener noreferrer"&gt;https://aws.amazon.com/developer/community/community-builders&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgz8xlkji1qyf9icjv1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgz8xlkji1qyf9icjv1a.png" alt="Image description" width="768" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Focus on a specific area of AWS:&lt;/strong&gt; While it's important to have a broad understanding of AWS, focusing on a specific area can help you become an expert in that area and make a bigger impact in the community.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engage with the AWS open-source community:&lt;/strong&gt; Participate in AWS open-source projects, contribute code, and share your contributions on your personal blog or social media. (&lt;a href="https://aws.amazon.com/opensource" rel="noopener noreferrer"&gt;https://aws.amazon.com/opensource&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stay up-to-date:&lt;/strong&gt; Stay informed of the latest AWS news and developments by following the AWS blog, attending webinars, and reading industry publications. (&lt;a href="https://aws.amazon.com/blogs/aws" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Participate in online forums:&lt;/strong&gt; Engage with other AWS users and experts by answering questions, providing solutions, and sharing your expertise in online forums such as the AWS Developer Forums or the AWS Community Heroes program. (&lt;a href="https://repost.aws" rel="noopener noreferrer"&gt;https://repost.aws&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaborate with AWS partners:&lt;/strong&gt; Partner with AWS partners, such as AWS solution providers, to learn more about AWS solutions and help others. (&lt;a href="https://partners.amazonaws.com" rel="noopener noreferrer"&gt;https://partners.amazonaws.com&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teach others:&lt;/strong&gt; Organise and lead AWS training sessions, workshops, and meetups to share your knowledge with others and help them grow their AWS expertise. (&lt;a href="https://www.aws.training" rel="noopener noreferrer"&gt;https://www.aws.training&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seek feedback:&lt;/strong&gt; Ask for feedback from others in the AWS community to learn what you can do better and to get suggestions on areas where you can improve.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In conclusion, becoming an AWS Hero requires a combination of technical expertise, community engagement, and sharing your knowledge with others. By following these suggestions, you can create value for yourself, demonstrate your expertise in AWS and make valuable contributions to the AWS community.&lt;/p&gt;

&lt;p&gt;"Good luck on your journey to becoming an AWS Hero!"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer: This blog is meant only for guidance and my own opinions based on my experience, not as a definitive path to becoming an AWS Hero. I believe that every individual has unique talent and has their own ways to work on their self-development skills.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsjk7vmvahnqj26mfdbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsjk7vmvahnqj26mfdbm.png" alt="Image description" width="650" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For more information on AWS Hero programme: &lt;a href="https://aws.amazon.com/developer/community/heroes" rel="noopener noreferrer"&gt;https://aws.amazon.com/developer/community/heroes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Follow me: &lt;a href="https://www.linkedin.com/in/cloudgeek7" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/cloudgeek7&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>vibecoding</category>
    </item>
  </channel>
</rss>
