<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Samuel Vazquez</title>
    <description>The latest articles on DEV Community by Samuel Vazquez (@codexmaker).</description>
    <link>https://dev.to/codexmaker</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F730698%2F6c016238-6c40-4293-a33a-e156a0e79bf1.png</url>
      <title>DEV Community: Samuel Vazquez</title>
      <link>https://dev.to/codexmaker</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/codexmaker"/>
    <language>en</language>
    <item>
      <title>AWS Lambda Strategies - Auto-Calling Like a Pro 🚀🔥</title>
      <dc:creator>Samuel Vazquez</dc:creator>
      <pubDate>Wed, 09 Apr 2025 17:08:33 +0000</pubDate>
      <link>https://dev.to/codexmaker/aws-lambda-strategies-auto-calling-like-a-pro-3fep</link>
      <guid>https://dev.to/codexmaker/aws-lambda-strategies-auto-calling-like-a-pro-3fep</guid>
      <description>&lt;p&gt;Tired of manually handling batch processing in AWS Lambda? Wanna scale like a boss without worrying about timeouts? Let’s talk about auto-calling Lambda functions and how they can supercharge your workflows! 💪&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;When dealing with large datasets in Lambda, processing everything in one go can lead to timeouts and memory issues. We need a better way—one that leverages asynchronous execution and auto-invocation to scale effortlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;Imagine you have a list of resources in your database, and each one needs AI-driven analysis, complex file generation, or some heavy math. Running all of them in a single function execution? Not ideal. Instead, we’ll have the function invoke itself recursively, so each resource is handled independently and in parallel. No more memory bloat, no more slow executions. Just pure efficiency. 🫡&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Sample:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lambda name: autocalling-lambda-sample
import json
import boto3

def get_waifu_list():
    return ["Asuna", "Rem", "Hinata", "Zero Two", "Nezuko", "Mikasa", "Kurisu"]

def custom_ai_analysis(name):
    personality_traits = {
        "Asuna": "Brave, Loyal, Strong-willed",
        "Rem": "Devoted, Kind, Fierce when needed",
        "Hinata": "Shy, Gentle, Loving",
        "Zero Two": "Mysterious, Playful, Protective",
        "Nezuko": "Cute, Silent, Deadly when angered",
        "Mikasa": "Stoic, Determined, Fierce",
        "Kurisu": "Smart, Witty, Tsundere"
    }
    return {"name": name, "personality": personality_traits.get(name, "Unknown")}

def lambda_handler(event, context):
    lambda_client = boto3.client('lambda')
    function_name = context.function_name
    waifu_list = get_waifu_list()

    if 'waifu_index' in event:
        waifu_name = waifu_list[event['waifu_index']]
        analysis_result = custom_ai_analysis(waifu_name)
        print(f"Processed: {json.dumps(analysis_result)}")
        return {'statusCode': 200, 'body': json.dumps(analysis_result)}
    else:
        for i in range(len(waifu_list)):
            lambda_client.invoke(
                FunctionName=function_name,
                InvocationType='Event',  # Asynchronous invocation
                Payload=json.dumps({'waifu_index': i})
            )
            print(f"Triggered async invocation for waifu {waifu_list[i]}")

        return {'statusCode': 200, 'body': json.dumps('Waifu analysis triggered!')}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Don't Forget:
&lt;/h2&gt;

&lt;p&gt;Add a custom policy to your Lambda's execution role that grants it the right to invoke itself, like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou0hcc8tjzfgy8hp2pbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou0hcc8tjzfgy8hp2pbc.png" alt="Image description" width="536" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhswq6rqsg5pl267lu5lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhswq6rqsg5pl267lu5lg.png" alt="Image description" width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;
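&lt;p&gt;For reference, the policy in the screenshots boils down to allowing the function to call &lt;code&gt;lambda:InvokeFunction&lt;/code&gt; on itself. Here is a minimal sketch of that policy document built in Python; the region, account ID, and function name below are placeholders, so swap in your own ARN:&lt;/p&gt;

```python
import json

# Placeholder ARN -- replace the region, account ID, and function name with your own.
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:autocalling-lambda-sample"

self_invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": function_arn,
        }
    ],
}

# Paste this JSON into a custom inline policy on the Lambda's execution role.
print(json.dumps(self_invoke_policy, indent=2))
```

Without this permission the recursive `invoke` calls fail with an `AccessDeniedException`, so add it before testing the fan-out.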

&lt;h2&gt;
  
  
  Debug:
&lt;/h2&gt;

&lt;p&gt;Enable CloudWatch Logs to watch your Lambda get invoked n times, and enjoy!&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is Awesome 😎
&lt;/h2&gt;

&lt;p&gt;✨ Scales Automatically – Each invocation only processes a single resource, preventing timeouts.&lt;br&gt;
✨ Parallel Execution – Multiple Lambda instances run at the same time.&lt;br&gt;
✨ Cost-Efficient – Pay only for what you use, avoiding unnecessary execution time.&lt;br&gt;
✨ Works on Any Dataset – Whether you have 10 or 10 million resources, this approach just works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;And That's It!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💬 Got questions or improvements? Drop a comment below!&lt;br&gt;
🔥 If this helped you, give it a ❤️ and share it with other Developers!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔗 Connect With Me!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/codexmaker/" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;br&gt;
📂 GitHub: &lt;a href="https://github.com/CodeXMakerCompany" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;br&gt;
🎥 YouTube: &lt;a href="https://www.youtube.com/@codexmaker4568" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have questions or improvements? Drop a comment below! 🚀🔥&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Lambda Layers with Docker Like a Pro 🥵</title>
      <dc:creator>Samuel Vazquez</dc:creator>
      <pubDate>Thu, 03 Apr 2025 23:21:48 +0000</pubDate>
      <link>https://dev.to/codexmaker/aws-lambda-layers-with-docker-like-a-pro-1i6o</link>
      <guid>https://dev.to/codexmaker/aws-lambda-layers-with-docker-like-a-pro-1i6o</guid>
      <description>&lt;p&gt;Tired of "No module found" errors in your Lambda functions? Architecture mismatches got you down? Let me show you how to build Lambda layers the right way! 💪&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Building Lambda layers locally often leads to compatibility issues, especially when your development machine doesn't match your Lambda's architecture (x86_64 vs ARM64).&lt;/p&gt;

&lt;h2&gt;
  
  
  Our use case and why this matters
&lt;/h2&gt;

&lt;p&gt;For this example, let's look at the Langfuse library for Python. If you are a Windows user and you build the package locally, whether in a virtual environment or not, the resulting zip will crash once you upload it as a layer and link it to your Lambda, due to pydantic_core.&lt;/p&gt;

&lt;p&gt;This approach ensures that compiled modules (like pydantic_core) are built for the exact architecture your Lambda uses, eliminating those frustrating "no module found" errors.&lt;/p&gt;

&lt;p&gt;Let's avoid architecture mismatches forever 🫧💗✨&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;I created this simple Docker Compose setup that builds Lambda layers with the exact same environment as AWS Lambda:&lt;/p&gt;

&lt;p&gt;AWS official images link: (&lt;a href="https://gallery.ecr.aws/sam/" rel="noopener noreferrer"&gt;https://gallery.ecr.aws/sam/&lt;/a&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  lambda-builder:
    image: public.ecr.aws/sam/build-python3.12:latest-arm64 
    volumes:
      - .:/var/task
    command: &amp;gt;
      bash -c "
        echo 'Building Lambda dependencies for ARM64...' &amp;amp;&amp;amp;
        # Install dependencies directly in the root directory where Lambda will look for them
        pip install langfuse -t /var/task/package/ &amp;amp;&amp;amp;
        cd /var/task/package &amp;amp;&amp;amp;
        zip -r /var/task/deployment-package.zip . &amp;amp;&amp;amp;
        cd /var/task &amp;amp;&amp;amp;
        if [ -f 'lambda_function.py' ]; then
          zip -g /var/task/deployment-package.zip lambda_function.py; 
        else
          echo 'Warning: lambda_function.py not found in the current directory';
        fi &amp;amp;&amp;amp;
        if [ -f '.env' ]; then
          zip -g /var/task/deployment-package.zip .env;
        fi &amp;amp;&amp;amp;
        echo 'Deployment package created at: /var/task/deployment-package.zip' &amp;amp;&amp;amp;
        ls -la /var/task/deployment-package.zip
      "
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Use It
&lt;/h2&gt;

&lt;p&gt;1️⃣ Create a project folder and save the Docker Compose file in it&lt;br&gt;
Make a new directory and place the docker-compose.yml above inside it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir lambda-layers-generator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Run the build process&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ Upload the generated deployment package&lt;br&gt;
The output file 'deployment-package.zip' will be in your project directory, ready to upload to AWS Lambda!&lt;/p&gt;
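&lt;p&gt;Before uploading, you can sanity-check that the dependencies really landed at the root of the zip, which is where Lambda looks for them. A small stand-alone sketch using only the standard library; the demo builds a throwaway zip just to illustrate the expected layout:&lt;/p&gt;

```python
import os
import tempfile
import zipfile

def top_level_entries(zip_path):
    """Return the set of top-level names inside a zip file."""
    with zipfile.ZipFile(zip_path) as zf:
        return {name.split("/")[0] for name in zf.namelist()}

# Demo: build a throwaway zip that mimics the expected layout,
# with the dependency package and lambda_function.py both at the root.
with tempfile.TemporaryDirectory() as tmp:
    demo_zip = os.path.join(tmp, "deployment-package.zip")
    with zipfile.ZipFile(demo_zip, "w") as zf:
        zf.writestr("langfuse/__init__.py", "")
        zf.writestr("lambda_function.py", "def lambda_handler(event, context): pass")
    entries = top_level_entries(demo_zip)
    print(sorted(entries))  # ['lambda_function.py', 'langfuse']
```

Point the same helper at the real deployment-package.zip produced by docker-compose to confirm the libraries sit at the root next to lambda_function.py.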

&lt;h2&gt;
  
  
  What's Actually Happening?
&lt;/h2&gt;

&lt;p&gt;Uses AWS's official Lambda container image (identical runtime environment)&lt;br&gt;
Installs dependencies directly at the root level (where Lambda expects them)&lt;br&gt;
Creates a properly structured ZIP package with all necessary files&lt;br&gt;
Works for both ARM64 and x86_64 (just change the image tag)&lt;br&gt;
No more "but it works on my machine" moments! 😅&lt;/p&gt;
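&lt;p&gt;To target x86_64 instead, only the image line of the Compose file changes. The tag below is taken from the SAM gallery linked above; double-check the exact tag for your Python version:&lt;/p&gt;

```yaml
services:
  lambda-builder:
    image: public.ecr.aws/sam/build-python3.12:latest-x86_64
    # everything else stays the same as the ARM64 version above
```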

&lt;p&gt;&lt;strong&gt;And That's It!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💬 Got questions or improvements? Drop a comment below!&lt;br&gt;
🔥 If this helped you, give it a ❤️ and share it with other Developers!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔗 Connect With Me!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/codexmaker/" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;br&gt;
📂 GitHub: &lt;a href="https://github.com/CodeXMakerCompany" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;br&gt;
🎥 YouTube: &lt;a href="https://www.youtube.com/@codexmaker4568" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have questions or improvements? Drop a comment below! 🚀🔥&lt;/p&gt;

&lt;h1&gt;
  
  
  #AWS #Lambda #DevOps #Python #CloudComputing #Docker
&lt;/h1&gt;

</description>
      <category>aws</category>
      <category>layer</category>
      <category>docker</category>
      <category>programming</category>
    </item>
    <item>
      <title>Move Your Hugging Face LLM to S3 Like a Pro (Without Wasting Local Space!) 🚀</title>
      <dc:creator>Samuel Vazquez</dc:creator>
      <pubDate>Sat, 08 Mar 2025 01:56:26 +0000</pubDate>
      <link>https://dev.to/codexmaker/move-your-hugging-face-llm-to-s3-like-a-pro-without-wasting-local-space-15kp</link>
      <guid>https://dev.to/codexmaker/move-your-hugging-face-llm-to-s3-like-a-pro-without-wasting-local-space-15kp</guid>
      <description>&lt;p&gt;So you've got this huge LLM (we're talking 100+ GB of pure AI magic), and you need to get it into AWS S3 without clogging up your disk?&lt;/p&gt;

&lt;p&gt;Let's do it the smart way—download, upload, and free up your space in one go! 💪&lt;/p&gt;

&lt;p&gt;Best part? This guide works inside AWS SageMaker (Using a Jupyter Notebook) or your local machine with AWS CLI configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  🔧 Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;br&gt;
👉 AWS CLI configured (if running locally)&lt;br&gt;
👉 SageMaker Notebook (if using AWS)&lt;br&gt;
👉 Hugging Face &amp;amp; Boto3 installed:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;!pip install huggingface_hub boto3&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then, import the necessary libraries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import boto3
from huggingface_hub import hf_hub_download
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🎯 The Mission
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Download model files from Hugging Face.&lt;/li&gt;
&lt;li&gt;Upload them to an AWS S3 bucket.&lt;/li&gt;
&lt;li&gt;Free up local storage by removing the files after upload.&lt;/li&gt;
&lt;li&gt;Clean up Hugging Face cache to reclaim disk space.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📥 Step 1: Set Up S3 and Model Paths
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3_client = boto3.client('s3')

# CHANGE this to your bucket
BUCKET_NAME = 'your-s3-bucket-name'  

# CHANGE this to the S3 prefix where the model files will go
MODEL_PATH = "deepseek"

# Make sure you have enough space!
SAVE_DIR = "/home/sagemaker-user/"  

# CHANGE from your selected model at https://huggingface.co/
repo_id = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We've listed all the files to grab from Hugging Face:&lt;/p&gt;

&lt;p&gt;List all the weight shards and the extra files the LLM needs. In this case each shard is around 9 GB, and with the 17 shards this model has, that's roughly 153 GB in total, not great for a single-file download.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model_files = [
    "model-00001-of-000017.safetensors", 
    "model-00002-of-000017.safetensors", ...
    "tokenizer.json", 
    "tokenizer_config.json"
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
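&lt;p&gt;Typing out 17 shard names by hand gets old fast. Assuming the shards follow the usual Hub naming pattern shown in the listing above, you can generate the list instead:&lt;/p&gt;

```python
num_shards = 17

# Weight shards follow the "model-XXXXX-of-XXXXXX.safetensors" pattern
# used in the listing above; tokenizer files are appended at the end.
model_files = [
    f"model-{i:05d}-of-{num_shards:06d}.safetensors"
    for i in range(1, num_shards + 1)
]
model_files += ["tokenizer.json", "tokenizer_config.json"]

print(model_files[0])   # model-00001-of-000017.safetensors
print(len(model_files)) # 19
```

Adjust `num_shards` (and the extra files) to match whatever the model's repo on Hugging Face actually lists.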



&lt;h2&gt;
  
  
  📄 Step 2: Upload to S3
&lt;/h2&gt;

&lt;p&gt;Define one function to upload files and another to delete them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def upload_to_s3(local_file_path, s3_key):
    s3_client.upload_file(local_file_path, BUCKET_NAME, s3_key)
    print(f"Uploaded {local_file_path} to S3 bucket {BUCKET_NAME} with key {s3_key}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def remove_local_file(file_path):
    try:
        os.remove(file_path)
        print(f"Removed local file {file_path}")
    except Exception as e:
        print(f"Error removing file {file_path}: {str(e)}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now download each file, upload it to S3, and delete the local copy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for file_name in model_files:
    local_file_path = hf_hub_download(repo_id=repo_id, filename=file_name, local_dir=SAVE_DIR)
    print(f"Downloaded {file_name} to {local_file_path}")

    s3_key = f"models/deepseek/{file_name}"
    upload_to_s3(local_file_path, s3_key)

    remove_local_file(local_file_path)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🗑 Extra Step: Clear Hugging Face Cache and debug your storage
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import shutil

def clear_huggingface_cache(cache_dir):
    """Clear Hugging Face cache directory"""
    try:
        shutil.rmtree(cache_dir)
        print(f"Cleared Hugging Face cache at {cache_dir}")
    except Exception as e:
        print(f"Error clearing cache: {str(e)}")

clear_huggingface_cache("/home/sagemaker-user/.cache/huggingface")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def check_disk_space(directory):
    """Log disk space for a specific directory, in GB"""
    total, used, free = shutil.disk_usage(directory)
    gib = 2 ** 30  # shutil.disk_usage returns bytes
    print(f"Disk space for {directory} - Total: {total // gib} GB, Used: {used // gib} GB, Free: {free // gib} GB")

check_disk_space("/home/sagemaker-user/.cache")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;And That's It!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;✅ Your model is now safely stored in S3.&lt;br&gt;
✅ Your local disk is free of clutter.&lt;br&gt;
✅ You're ready to deploy or fine-tune without worrying about storage! 🚀&lt;/p&gt;

&lt;p&gt;💬 Got questions or improvements? Drop a comment below!&lt;br&gt;
🔥 If this helped you, give it a ❤️ and share it with other LLM builders!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔗 Connect With Me!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;💼 LinkedIn: &lt;a href="https://www.linkedin.com/in/codexmaker/" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;br&gt;
📂 GitHub: &lt;a href="https://github.com/CodeXMakerCompany" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;br&gt;
🎥 YouTube: &lt;a href="https://www.youtube.com/@codexmaker4568" rel="noopener noreferrer"&gt;CodexMaker&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have questions or improvements? Drop a comment below! 🚀🔥&lt;/p&gt;

&lt;p&gt;"Remember: AI is poetry, words become numbers, numbers will shape the future"&lt;br&gt;
  Codexmaker&lt;/p&gt;

</description>
      <category>bedrock</category>
      <category>awsbedrock</category>
      <category>aws</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
