<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: parmarjatin4911@gmail.com</title>
    <description>The latest articles on DEV Community by parmarjatin4911@gmail.com (@jhparmar).</description>
    <link>https://dev.to/jhparmar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F417165%2F2f30840a-a841-4897-8bce-aea077d94714.jpeg</url>
      <title>DEV Community: parmarjatin4911@gmail.com</title>
      <link>https://dev.to/jhparmar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jhparmar"/>
    <language>en</language>
    <item>
      <title>Deploy OpenClaw on AWS EC2 with Amazon Bedrock Using Terraform — No API Keys Required</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Sun, 15 Mar 2026 17:28:53 +0000</pubDate>
      <link>https://dev.to/jhparmar/deploy-openclaw-on-aws-ec2-with-amazon-bedrock-using-terraform-no-api-keys-required-4gg4</link>
      <guid>https://dev.to/jhparmar/deploy-openclaw-on-aws-ec2-with-amazon-bedrock-using-terraform-no-api-keys-required-4gg4</guid>
      <description>&lt;p&gt;Deploy OpenClaw on AWS EC2 with Amazon Bedrock Using Terraform — No API Keys Required&lt;br&gt;
TL;DR: A complete Terraform setup to deploy OpenClaw — the open-source multi-channel AI gateway — on an EC2 instance, wired to Amazon Bedrock's Claude models via IAM instance roles. Zero API key management. Full CI/CD pipeline included.&lt;/p&gt;

&lt;p&gt;Why This Matters&lt;br&gt;
If you've ever wanted a personal AI coding assistant you can message from WhatsApp, Telegram, or Discord — without relying on a hosted service — OpenClaw is the answer. It's an open-source, self-hosted gateway that bridges your chat apps to AI agents.&lt;/p&gt;

&lt;p&gt;The typical setup requires managing API keys for your LLM provider. But when you run on AWS, there's a cleaner approach: Amazon Bedrock with IAM instance roles. The EC2 instance authenticates to Bedrock automatically through instance metadata — no secrets to rotate, no environment variables to leak.&lt;/p&gt;

&lt;p&gt;This article walks through a Terraform project that automates the entire deployment, from VPC security groups to a running OpenClaw gateway, including a GitHub Actions CI/CD pipeline with OIDC authentication.&lt;/p&gt;

&lt;p&gt;Repository: github.com/Yash-Kavaiya/openclaw-bedrock-ec2&lt;/p&gt;

&lt;p&gt;What is OpenClaw?&lt;br&gt;
OpenClaw is a self-hosted gateway that connects chat applications — WhatsApp, Telegram, Discord, iMessage, and others — to AI coding agents. You run a single Gateway process on your machine or server, and it becomes the bridge between your messaging apps and an always-available AI assistant.&lt;/p&gt;

&lt;p&gt;Key characteristics:&lt;/p&gt;

&lt;p&gt;Self-hosted: Runs on your hardware, under your control&lt;br&gt;
Multi-channel: One Gateway serves WhatsApp, Telegram, and Discord simultaneously&lt;br&gt;
Agent-native: Built for coding agents with tool use, sessions, memory, and multi-agent routing&lt;br&gt;
Open source: MIT licensed&lt;/p&gt;

&lt;p&gt;The Gateway listens on port 18789 and handles session management, routing, and channel connections.&lt;/p&gt;

&lt;p&gt;Architecture Overview&lt;br&gt;
text&lt;/p&gt;

&lt;p&gt;┌─────────────────────────────────────────────┐&lt;br&gt;
│  EC2 (Amazon Linux 2023, t3.small)          │&lt;br&gt;
│                                             │&lt;br&gt;
│  ┌─────────────────────────────────────┐    │&lt;br&gt;
│  │  OpenClaw Gateway  :18789           │    │&lt;br&gt;
│  │  Primary: Claude Sonnet 4.6         │    │&lt;br&gt;
│  │  Fallback: Haiku 3.5 → Opus 4.6     │    │&lt;br&gt;
│  └──────────────┬──────────────────────┘    │&lt;br&gt;
│                 │ IAM Instance Role          │&lt;br&gt;
└─────────────────┼───────────────────────────┘&lt;br&gt;
                  │&lt;br&gt;
          Amazon Bedrock (us-east-1)&lt;br&gt;
          AmazonBedrockFullAccess&lt;/p&gt;

&lt;p&gt;The setup provisions:&lt;/p&gt;

&lt;p&gt;An EC2 instance running Amazon Linux 2023 on t3.small&lt;br&gt;
An IAM instance profile with AmazonBedrockFullAccess&lt;br&gt;
A security group allowing SSH (22), HTTP (80), HTTPS (443), and the OpenClaw gateway port (18789)&lt;br&gt;
A user data script that bootstraps Node.js 22, installs OpenClaw, writes the config, and starts it as a systemd service&lt;br&gt;
A templated OpenClaw config (openclaw.json) that registers three Bedrock Claude models with automatic discovery enabled&lt;/p&gt;

&lt;p&gt;Prerequisites&lt;br&gt;
Before you begin, ensure you have:&lt;/p&gt;

&lt;p&gt;Terraform &amp;gt;= 1.5 installed (install guide)&lt;br&gt;
AWS CLI configured with credentials (aws configure)&lt;br&gt;
An EC2 key pair created in your AWS account&lt;br&gt;
Bedrock model access enabled in your AWS region&lt;br&gt;
Navigate to: AWS Console → Bedrock → Model access → Enable Claude models&lt;/p&gt;

&lt;p&gt;The Terraform Code, Explained&lt;br&gt;
Project Structure&lt;br&gt;
text&lt;/p&gt;

&lt;p&gt;openclaw-bedrock-ec2/&lt;br&gt;
├── main.tf                    # Provider, data sources, SG, EC2 instance&lt;br&gt;
├── iam.tf                     # IAM role + instance profile for Bedrock&lt;br&gt;
├── variables.tf               # Input variables&lt;br&gt;
├── outputs.tf                 # Gateway URL and SSH command outputs&lt;br&gt;
├── versions.tf                # Terraform + provider version constraints&lt;br&gt;
├── openclaw.json.tpl          # OpenClaw config template&lt;br&gt;
├── user_data.sh.tpl           # EC2 bootstrap script template&lt;br&gt;
├── terraform.tfvars.example   # Example variables file&lt;br&gt;
├── github-oidc.tf             # GitHub Actions OIDC role&lt;br&gt;
└── .github/workflows/&lt;br&gt;
    └── terraform.yml          # CI/CD pipeline&lt;/p&gt;

&lt;p&gt;1. IAM: Keyless Bedrock Access&lt;br&gt;
The core idea is using an IAM instance role instead of API keys. The iam.tf file creates:&lt;/p&gt;

&lt;p&gt;hcl&lt;/p&gt;

&lt;p&gt;resource "aws_iam_role" "openclaw_bedrock" {&lt;br&gt;
  name = "openclaw-bedrock-ec2-role"&lt;/p&gt;

&lt;p&gt;assume_role_policy = jsonencode({&lt;br&gt;
    Version = "2012-10-17"&lt;br&gt;
    Statement = [&lt;br&gt;
      {&lt;br&gt;
        Effect    = "Allow"&lt;br&gt;
        Principal = { Service = "ec2.amazonaws.com" }&lt;br&gt;
        Action    = "sts:AssumeRole"&lt;br&gt;
      }&lt;br&gt;
    ]&lt;br&gt;
  })&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_iam_role_policy_attachment" "bedrock_full_access" {&lt;br&gt;
  role       = aws_iam_role.openclaw_bedrock.name&lt;br&gt;
  policy_arn = "arn:aws:iam::aws:policy/AmazonBedrockFullAccess"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_iam_instance_profile" "openclaw_bedrock" {&lt;br&gt;
  name = "openclaw-bedrock-ec2-profile"&lt;br&gt;
  role = aws_iam_role.openclaw_bedrock.name&lt;br&gt;
}&lt;br&gt;
The managed policy AmazonBedrockFullAccess grants:&lt;/p&gt;

&lt;p&gt;bedrock:InvokeModel&lt;br&gt;
bedrock:InvokeModelWithResponseStream&lt;br&gt;
bedrock:ListFoundationModels&lt;/p&gt;

&lt;p&gt;OpenClaw's config uses "auth": "aws-sdk", which automatically picks up the instance metadata credentials. No keys in environment variables, no secrets to manage.&lt;/p&gt;
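&lt;p&gt;If you want tighter scoping than the managed policy grants, a hand-written inline policy limited to just those actions is a reasonable sketch (the Resource is left as "*" here for brevity; it can be narrowed to specific foundation-model ARNs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;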

&lt;p&gt;2. OpenClaw Configuration Template&lt;br&gt;
The openclaw.json.tpl template wires up three Claude models through Bedrock:&lt;/p&gt;

&lt;p&gt;json&lt;/p&gt;

&lt;p&gt;{&lt;br&gt;
  "models": {&lt;br&gt;
    "mode": "merge",&lt;br&gt;
    "bedrockDiscovery": {&lt;br&gt;
      "enabled": true,&lt;br&gt;
      "region": "${region}",&lt;br&gt;
      "providerFilter": ["anthropic"],&lt;br&gt;
      "refreshInterval": 3600,&lt;br&gt;
      "defaultContextWindow": 200000,&lt;br&gt;
      "defaultMaxTokens": 8192&lt;br&gt;
    },&lt;br&gt;
    "providers": {&lt;br&gt;
      "amazon-bedrock": {&lt;br&gt;
        "baseUrl": "https://bedrock-runtime.${region}.amazonaws.com",&lt;br&gt;
        "api": "bedrock-converse-stream",&lt;br&gt;
        "auth": "aws-sdk",&lt;br&gt;
        "models": [&lt;br&gt;
          {&lt;br&gt;
            "id": "us.anthropic.claude-sonnet-4-6-v1:0",&lt;br&gt;
            "name": "Claude Sonnet 4.6 (Bedrock)",&lt;br&gt;
            "reasoning": false,&lt;br&gt;
            "contextWindow": 200000,&lt;br&gt;
            "maxTokens": 8192&lt;br&gt;
          },&lt;br&gt;
          {&lt;br&gt;
            "id": "us.anthropic.claude-haiku-3-5-v1:0",&lt;br&gt;
            "name": "Claude Haiku 3.5 (Bedrock)",&lt;br&gt;
            "reasoning": false&lt;br&gt;
          },&lt;br&gt;
          {&lt;br&gt;
            "id": "us.anthropic.claude-opus-4-6-v1:0",&lt;br&gt;
            "name": "Claude Opus 4.6 (Bedrock)",&lt;br&gt;
            "reasoning": true&lt;br&gt;
          }&lt;br&gt;
        ]&lt;br&gt;
      }&lt;br&gt;
    }&lt;br&gt;
  },&lt;br&gt;
  "agents": {&lt;br&gt;
    "defaults": {&lt;br&gt;
      "model": {&lt;br&gt;
        "primary": "amazon-bedrock/us.anthropic.claude-sonnet-4-6-v1:0",&lt;br&gt;
        "fallback": [&lt;br&gt;
          "amazon-bedrock/us.anthropic.claude-haiku-3-5-v1:0",&lt;br&gt;
          "amazon-bedrock/us.anthropic.claude-opus-4-6-v1:0"&lt;br&gt;
        ]&lt;br&gt;
      }&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;Notable design choices:&lt;/p&gt;

&lt;p&gt;Primary model: Claude Sonnet 4.6 — strong balance of capability and speed&lt;br&gt;
Fallback chain: Haiku 3.5 (fast/cheap) → Opus 4.6 (most capable, with reasoning)&lt;br&gt;
Auto-discovery: bedrockDiscovery.enabled = true means any additional Anthropic models you enable in Bedrock are automatically available&lt;br&gt;
Cost fields set to 0: Bedrock billing is handled by AWS, not by token-based API keys&lt;/p&gt;

&lt;p&gt;3. EC2 Bootstrap Script&lt;br&gt;
The user_data.sh.tpl script runs on first boot:&lt;/p&gt;

&lt;p&gt;bash&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
set -euo pipefail
exec &amp;gt; /var/log/openclaw-bootstrap.log 2&amp;gt;&amp;amp;1

# 1. System update
dnf update -y

# 2. Install Node.js 22
curl -fsSL https://rpm.nodesource.com/setup_22.x | bash -
dnf install -y nodejs

# 3. Install OpenClaw globally
npm install -g openclaw

# 4. Write config to ~/.openclaw/openclaw.json
mkdir -p /root/.openclaw
cat &amp;gt; /root/.openclaw/openclaw.json &amp;lt;&amp;lt; 'OPENCLAW_CONFIG'
${openclaw_config}
OPENCLAW_CONFIG

# 5. Create systemd service and start
systemctl daemon-reload
systemctl enable openclaw
systemctl start openclaw
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;The script creates a proper systemd service so OpenClaw:&lt;/p&gt;

&lt;p&gt;Starts automatically on boot&lt;br&gt;
Restarts on failure (with a 10-second backoff)&lt;br&gt;
Logs to journald (viewable via journalctl -u openclaw -f)&lt;/p&gt;
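&lt;p&gt;The unit file itself is elided from the bootstrap excerpt above; a minimal sketch consistent with the behavior just described would look like the following (the ExecStart path and arguments are assumptions, not the repository's exact unit):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=OpenClaw Gateway
After=network-online.target

[Service]
# Path is illustrative; point ExecStart at wherever npm installed the binary
ExecStart=/usr/bin/openclaw gateway
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;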

&lt;p&gt;4. Variables&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;&lt;th&gt;Variable&lt;/th&gt;&lt;th&gt;Required&lt;/th&gt;&lt;th&gt;Default&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;key_name&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;—&lt;/td&gt;&lt;td&gt;EC2 key pair name&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;openclaw_gateway_token&lt;/td&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;—&lt;/td&gt;&lt;td&gt;Secret auth token for the gateway&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;region&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;us-east-1&lt;/td&gt;&lt;td&gt;AWS region&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;instance_type&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;t3.small&lt;/td&gt;&lt;td&gt;EC2 instance type&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;ami_id&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Latest AL2023&lt;/td&gt;&lt;td&gt;Override AMI&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;allowed_ssh_cidr&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;0.0.0.0/0&lt;/td&gt;&lt;td&gt;Restrict SSH access&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;aws_bearer_token_bedrock&lt;/td&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;""&lt;/td&gt;&lt;td&gt;Bearer token (alternative auth)&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Step-by-Step Deployment&lt;br&gt;
Step 1: Clone the Repository&lt;br&gt;
bash&lt;/p&gt;

&lt;p&gt;git clone &lt;a href="https://github.com/Yash-Kavaiya/openclaw-bedrock-ec2" rel="noopener noreferrer"&gt;https://github.com/Yash-Kavaiya/openclaw-bedrock-ec2&lt;/a&gt;&lt;br&gt;
cd openclaw-bedrock-ec2&lt;br&gt;
Step 2: Configure Variables&lt;br&gt;
bash&lt;/p&gt;

&lt;p&gt;cp terraform.tfvars.example terraform.tfvars&lt;br&gt;
Edit terraform.tfvars:&lt;/p&gt;

&lt;p&gt;hcl&lt;/p&gt;

&lt;p&gt;key_name               = "my-ec2-keypair"&lt;br&gt;
openclaw_gateway_token = "a-strong-random-token-here"&lt;br&gt;
region                 = "us-east-1"&lt;br&gt;
Important: Never commit terraform.tfvars — it's already in .gitignore.&lt;/p&gt;

&lt;p&gt;Step 3: Initialize and Deploy&lt;br&gt;
bash&lt;/p&gt;

&lt;p&gt;terraform init&lt;br&gt;
terraform plan    # Review what will be created&lt;br&gt;
terraform apply   # Type 'yes' to confirm&lt;br&gt;
Step 4: Connect&lt;br&gt;
After terraform apply completes, the outputs show:&lt;/p&gt;

&lt;p&gt;text&lt;/p&gt;

&lt;p&gt;openclaw_gateway_url = http://&amp;lt;public-ip&amp;gt;:18789&lt;br&gt;
ssh_command          = ssh -i ~/.ssh/my-key.pem ec2-user@&amp;lt;public-ip&amp;gt;&lt;br&gt;
Configure the OpenClaw CLI to point at your gateway:&lt;/p&gt;

&lt;p&gt;bash&lt;/p&gt;

&lt;p&gt;openclaw config set gateway.url http://&amp;lt;public-ip&amp;gt;:18789&lt;br&gt;
openclaw config set gateway.auth.token &amp;lt;your-token&amp;gt;&lt;br&gt;
openclaw status&lt;br&gt;
Or monitor the bootstrap via SSH:&lt;/p&gt;

&lt;p&gt;bash&lt;/p&gt;

&lt;p&gt;ssh -i ~/.ssh/my-key.pem ec2-user@&amp;lt;public-ip&amp;gt;&lt;br&gt;
sudo tail -f /var/log/openclaw-bootstrap.log&lt;br&gt;
sudo journalctl -u openclaw -f&lt;br&gt;
CI/CD Pipeline with GitHub Actions&lt;br&gt;
The repository includes a GitHub Actions workflow (.github/workflows/terraform.yml) that automates the entire lifecycle using OIDC authentication — no long-lived AWS access keys stored in GitHub.&lt;/p&gt;

&lt;p&gt;Pipeline Behavior&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;&lt;th&gt;Trigger&lt;/th&gt;&lt;th&gt;What Happens&lt;/th&gt;&lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;Pull Request&lt;/td&gt;&lt;td&gt;Lint + Validate → Plan (posted as PR comment)&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Push to master&lt;/td&gt;&lt;td&gt;Lint + Validate → Manual approval → Apply&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;workflow_dispatch&lt;/td&gt;&lt;td&gt;Manual approval → Destroy&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;How OIDC Auth Works&lt;br&gt;
Instead of storing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as GitHub secrets, the pipeline uses OpenID Connect:&lt;/p&gt;

&lt;p&gt;GitHub Actions requests a short-lived JWT from GitHub's OIDC provider&lt;br&gt;
AWS STS exchanges that JWT for temporary AWS credentials&lt;br&gt;
The temporary credentials are scoped to only the permissions the IAM role allows&lt;br&gt;
This is set up via github-oidc.tf, which creates an IAM OIDC provider and a role that trusts GitHub's identity provider.&lt;/p&gt;
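&lt;p&gt;On the workflow side, the exchange usually amounts to a permissions block plus the official credentials action; a sketch (the repository's terraform.yml may differ in detail):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;permissions:
  id-token: write   # allow the job to request GitHub's OIDC JWT
  contents: read

steps:
  - name: Configure AWS credentials
    uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
      aws-region: us-east-1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;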

&lt;p&gt;Setup Steps&lt;br&gt;
Bootstrap the OIDC role from your local machine:&lt;br&gt;
bash&lt;/p&gt;

&lt;p&gt;terraform apply \&lt;br&gt;
  -target=aws_iam_openid_connect_provider.github \&lt;br&gt;
  -target=aws_iam_role.github_actions \&lt;br&gt;
  -target=aws_iam_role_policy.github_actions_terraform \&lt;br&gt;
  -var="key_name=any" \&lt;br&gt;
  -var="openclaw_gateway_token=placeholder"&lt;/p&gt;

&lt;p&gt;terraform output github_actions_role_arn&lt;br&gt;
Add GitHub Secrets (repo → Settings → Secrets → Actions):&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;&lt;th&gt;Secret&lt;/th&gt;&lt;th&gt;Value&lt;/th&gt;&lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;AWS_ROLE_ARN&lt;/td&gt;&lt;td&gt;ARN from step 1&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;TF_KEY_NAME&lt;/td&gt;&lt;td&gt;Your EC2 key pair name&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;OPENCLAW_GATEWAY_TOKEN&lt;/td&gt;&lt;td&gt;Your gateway auth token&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Create a production environment (repo → Settings → Environments) and add yourself as a required reviewer. This gates the apply and destroy jobs behind manual approval.&lt;/p&gt;

&lt;p&gt;Security Considerations&lt;br&gt;
A few things to keep in mind for production use:&lt;/p&gt;

&lt;p&gt;Port 18789 is open to 0.0.0.0/0 by default. For production, restrict the CIDR or place the instance behind a VPN, Tailscale, or an ALB with TLS termination.&lt;br&gt;
The gateway token is embedded in user data. For production, store it in AWS Secrets Manager and fetch it during bootstrap.&lt;br&gt;
SSH access is open by default. Set allowed_ssh_cidr to your IP range.&lt;br&gt;
AmazonBedrockFullAccess is broad. For production, create a custom policy limited to bedrock:InvokeModel* and bedrock:ListFoundationModels.&lt;/p&gt;

&lt;p&gt;Cost Estimate&lt;br&gt;
Running costs are modest:&lt;/p&gt;

&lt;p&gt;EC2 t3.small: ~$15/month (on-demand), less with Reserved or Spot&lt;br&gt;
Bedrock usage: Pay-per-token based on your Claude model usage (no upfront commitment)&lt;br&gt;
Storage: 20 GB gp3 EBS — ~$1.60/month&lt;/p&gt;

&lt;p&gt;Tear Down&lt;br&gt;
When you're done:&lt;/p&gt;

&lt;p&gt;bash&lt;/p&gt;

&lt;p&gt;terraform destroy&lt;br&gt;
Or trigger the Destroy workflow from GitHub Actions → Run workflow (requires manual approval).&lt;/p&gt;

&lt;p&gt;Wrapping Up&lt;br&gt;
This Terraform project eliminates two common friction points in self-hosting AI agents:&lt;/p&gt;

&lt;p&gt;API key management — IAM instance roles handle authentication to Bedrock automatically&lt;br&gt;
Manual server setup — Terraform + user data scripts automate the entire provisioning and configuration&lt;br&gt;
The result is a reproducible, one-command deployment of OpenClaw on AWS with Claude models, complete with CI/CD and proper infrastructure lifecycle management.&lt;/p&gt;

&lt;p&gt;Get started: github.com/Yash-Kavaiya/openclaw-bedrock-ec2&lt;/p&gt;

&lt;p&gt;Tags: AWS, Terraform, Amazon Bedrock, Claude, OpenClaw, Infrastructure as Code, AI Agents, DevOps, GitHub Actions, OIDC&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>opensource</category>
      <category>terraform</category>
    </item>
    <item>
      <title>🔐 SecureChat Bot: End-to-End Encrypted Messaging with Advanced Security Features</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Sun, 17 Aug 2025 03:03:44 +0000</pubDate>
      <link>https://dev.to/jhparmar/securechat-botend-to-end-encrypted-messaging-with-advanced-security-features-3hl1</link>
      <guid>https://dev.to/jhparmar/securechat-botend-to-end-encrypted-messaging-with-advanced-security-features-3hl1</guid>
      <description>&lt;p&gt;🔐 &lt;a href="https://www.youware.com/project/2vzp74ia7y?enter_from=share&amp;amp;invite_code=VT0J5IX06B" rel="noopener noreferrer"&gt;SecureChat&lt;br&gt;
&lt;/a&gt;End-to-End Encrypted Messaging with Advanced Security Features &lt;/p&gt;

&lt;p&gt;E2E Encryption&lt;br&gt;
Self-Destruct Messages&lt;br&gt;
Biometric Auth&lt;br&gt;
Group Chats&lt;br&gt;
File Sharing&lt;br&gt;
Real-time Sync&lt;br&gt;
AI Security&lt;br&gt;
VIBECODE&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>vibecoding</category>
      <category>ai</category>
    </item>
    <item>
      <title>LLM Assessment Trick</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Fri, 15 Nov 2024 11:48:17 +0000</pubDate>
      <link>https://dev.to/jhparmar/llm-assessment-trick-5g60</link>
      <guid>https://dev.to/jhparmar/llm-assessment-trick-5g60</guid>
      <description>&lt;p&gt;Review the LLM prompt and requirements and evaluate the Response1 and Response2 files for your programming language Pick which response satisfies the prompt more accurately and review the code for that response against the Response Requirements Input your final evaluation into Assessment.txt and explain why one response is better than the other Only complete the tasks for 1 programming language / folder of your choosing Prompt provided to the LLM&lt;/p&gt;

&lt;p&gt;"i need code that takes in a number and returns the opposite of that numbe so wirttan backwards"&lt;/p&gt;

&lt;p&gt;Response Requirements&lt;/p&gt;

&lt;p&gt;The response must have code.&lt;br&gt;
The response must use the correct programming language.&lt;br&gt;
The response must implement a function.&lt;br&gt;
The function must receive an integer argument.&lt;br&gt;
The response must return the integer written in reverse.&lt;/p&gt;

&lt;p&gt;Hints and Tips&lt;/p&gt;

&lt;p&gt;Review each Response and choose which one comes the closest to satisfying the prompt.&lt;br&gt;
If the prompt asks for code, the response must provide it.&lt;br&gt;
If a response generates code, you must ensure the code runs without errors.&lt;br&gt;
If there are errors in the code, use the comment feature to correct the code.&lt;br&gt;
The response must satisfy but not go beyond the Response Requirements listed above.&lt;br&gt;
The Response Requirements are more important than code functionality.&lt;br&gt;
Do not rewrite the function, only fix the errors.&lt;br&gt;
At the end, choose which Response is better, and justify your answer in the Assessment.txt file.&lt;br&gt;
Provide details describing why you chose that Response.&lt;br&gt;
Be mindful of your grammar. You may use Grammarly, but you may not use an LLM like ChatGPT.&lt;/p&gt;

&lt;p&gt;To complete the task, I'll evaluate Response1 and Response2 based on the prompt and the response requirements. Since I need to choose a programming language and folder, I will assume the language is Python for this example.&lt;/p&gt;

&lt;p&gt;Evaluation of Responses&lt;br&gt;
Response1:&lt;br&gt;
python&lt;/p&gt;

&lt;p&gt;def reverse_number(num):&lt;br&gt;
    return str(num)[::-1]&lt;/p&gt;

&lt;p&gt;Response2:&lt;br&gt;
python&lt;/p&gt;

&lt;p&gt;def reverse_integer(n):&lt;br&gt;
    return int(str(n)[::-1])&lt;/p&gt;

&lt;p&gt;Analysis&lt;br&gt;
Code Presence: Both responses provide code.&lt;br&gt;
Correct Programming Language: Both responses are written in Python.&lt;br&gt;
Function Implementation: Both responses implement a function.&lt;br&gt;
Integer Argument: Both functions receive an integer argument.&lt;br&gt;
Return Value:&lt;br&gt;
Response1 returns the reversed number as a string.&lt;br&gt;
Response2 returns the reversed number as an integer.&lt;br&gt;
Response Requirements Evaluation&lt;br&gt;
Response1:&lt;br&gt;
Satisfies the requirements: It has a function that takes an integer and returns the reversed representation, but it returns a string instead of an integer.&lt;br&gt;
Response2:&lt;br&gt;
Satisfies the requirements: It has a function that takes an integer and returns the reversed representation as an integer, which aligns perfectly with the prompt.&lt;br&gt;
Conclusion&lt;br&gt;
Response2 is better than Response1 because it meets all the requirements of the prompt accurately. It correctly returns the reversed integer, while Response1 does not fulfill the requirement of returning an integer.&lt;/p&gt;

&lt;p&gt;Final Evaluation for Assessment.txt&lt;/p&gt;

&lt;p&gt;Response2 is the better choice as it accurately meets the prompt requirements. It defines a function that takes an integer argument and returns the integer written in reverse as an integer. Response1, while it does provide a function that reverses the number, returns the result as a string, which does not satisfy the requirement of returning an integer. Therefore, Response2 is the more appropriate and correct response.&lt;/p&gt;

&lt;p&gt;AI: Certainly!&lt;br&gt;
You can create a function that takes in a number and returns its reverse.&lt;br&gt;
Here’s how you can do it:&lt;/p&gt;

&lt;p&gt;def reverse_number(num):&lt;br&gt;
    # Reverse the number&lt;br&gt;
    reverse = num[::-1]&lt;br&gt;
    # Return the number&lt;br&gt;
    return reverse&lt;/p&gt;

&lt;p&gt;# Example usage:&lt;br&gt;
print(reverse_number(1223))  # Output: 3221&lt;br&gt;
print(reverse_number(987654321))  # Output: 123456789&lt;/p&gt;

&lt;p&gt;Let's evaluate the provided code snippet against the prompt and the response requirements.&lt;/p&gt;

&lt;p&gt;python&lt;/p&gt;

&lt;p&gt;def reverse_number(num):&lt;br&gt;
  # Reverse the number&lt;br&gt;
  reverse = num[::-1]&lt;br&gt;
  # Return the number&lt;br&gt;
  return reverse&lt;/p&gt;

&lt;p&gt;# Example usage:&lt;br&gt;
print(reverse_number(1223)) # Output: 3221&lt;br&gt;
print(reverse_number(987654321)) # Output: 123456789&lt;/p&gt;

&lt;p&gt;Evaluation Against the Prompt and Requirements&lt;br&gt;
Code Presence: The snippet provides code.&lt;br&gt;
Correct Programming Language: The code is written in Python.&lt;br&gt;
Function Implementation: The function reverse_number is implemented.&lt;br&gt;
Integer Argument: The function is expected to receive an integer argument.&lt;br&gt;
Return Value:&lt;br&gt;
The line reverse = num[::-1] attempts to reverse the number. However, this line will cause an error because slicing ([::-1]) is applied directly to an integer, which Python does not support.&lt;br&gt;
The function does not return an integer but rather a string representation of the reversed number.&lt;br&gt;
Issues Identified&lt;br&gt;
The code will raise a TypeError because you cannot slice an integer.&lt;br&gt;
The expected output, as indicated in the comments, is incorrect since the function will not work as intended.&lt;br&gt;
Correction&lt;br&gt;
To fix the code while keeping the original intention, we can convert the integer to a string, reverse it, and then convert it back to an integer. Here’s the corrected version:&lt;/p&gt;

&lt;p&gt;python&lt;/p&gt;

&lt;p&gt;def reverse_number(num):&lt;br&gt;
    # Reverse the number by converting to string, reversing, and converting back to int&lt;br&gt;
    reverse = int(str(num)[::-1])&lt;br&gt;
    # Return the reversed number&lt;br&gt;
    return reverse&lt;/p&gt;

&lt;p&gt;# Example usage:&lt;br&gt;
print(reverse_number(1223)) # Output: 3221&lt;br&gt;
print(reverse_number(987654321)) # Output: 123456789&lt;/p&gt;

&lt;p&gt;Final Evaluation for Assessment.txt&lt;/p&gt;

&lt;p&gt;The provided code snippet attempts to implement a function that reverses a number. However, it contains a critical error where it tries to slice an integer, which is not valid in Python. The function needs to convert the integer to a string before reversing it and then convert it back to an integer for the final output. The corrected version successfully meets the prompt requirements by returning the reversed number as an integer. Therefore, while the initial response shows the right intention, it requires correction to function correctly.&lt;/p&gt;
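&lt;p&gt;One limitation worth flagging in the assessment (beyond the stated requirements): int(str(n)[::-1]) raises ValueError for negative inputs, because the minus sign ends up at the end of the reversed string. A sketch that also preserves the sign:&lt;/p&gt;

```python
def reverse_integer_signed(n):
    # Reverse the digits of an integer while keeping its sign.
    # str(-45)[::-1] would give '54-', which int() rejects, so we
    # reverse the absolute value and re-apply the sign afterwards.
    sign = -1 if str(n).startswith('-') else 1
    return sign * int(str(abs(n))[::-1])

print(reverse_integer_signed(1223))   # 3221
print(reverse_integer_signed(-450))   # -54
print(reverse_integer_signed(120))    # 21 (leading zero is dropped)
```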

&lt;p&gt;Review the LLM prompt and requirements and evaluate the Response1 and Response2 files for your programming language. Pick which response satisfies the prompt more accurately and review the code for that response against the Response Requirements. Input your final evaluation into Assessment.txt and explain why one response is better than the other. Only complete the tasks for 1 programming language / folder of your choosing. Prompt provided to the LLM:&lt;/p&gt;

&lt;p&gt;"i need code that takes in a number and returns the opposite of that numbe so wirttan backwards"&lt;/p&gt;

&lt;p&gt;Response Requirements&lt;/p&gt;

&lt;p&gt;The response must have code.&lt;br&gt;
The response must use the correct programming language.&lt;br&gt;
The response must implement a function.&lt;br&gt;
The function must receive an integer argument.&lt;br&gt;
The response must return the integer written in reverse.&lt;/p&gt;

&lt;p&gt;Hints and Tips&lt;/p&gt;

&lt;p&gt;Review each Response and choose which one comes the closest to satisfying the prompt.&lt;br&gt;
If the prompt asks for code, the response must provide it.&lt;br&gt;
If a response generates code, you must ensure the code runs without errors.&lt;br&gt;
If there are errors in the code, use the comment feature to correct the code.&lt;br&gt;
The response must satisfy but not go beyond the Response Requirements listed above.&lt;br&gt;
The Response Requirements are more important than code functionality.&lt;br&gt;
Do not rewrite the function, only fix the errors.&lt;br&gt;
At the end, choose which Response is better, and justify your answer in the Assessment.txt file.&lt;br&gt;
Provide details describing why you chose that Response.&lt;br&gt;
Be mindful of your grammar. You may use Grammarly, but you may not use an LLM like ChatGPT.&lt;/p&gt;

&lt;p&gt;fix the code llm&lt;/p&gt;

&lt;p&gt;Let's evaluate Response1 and Response2 based on the provided prompt and requirements. For this example, I will assume we are working in Python.&lt;/p&gt;

&lt;p&gt;Responses&lt;br&gt;
Response1:&lt;br&gt;
python&lt;/p&gt;

&lt;p&gt;def reverse_number(num):&lt;br&gt;
    return str(num)[::-1]&lt;/p&gt;

&lt;p&gt;Response2:&lt;br&gt;
python&lt;/p&gt;

&lt;p&gt;def reverse_integer(n):&lt;br&gt;
    return int(str(n)[::-1])&lt;br&gt;
Evaluation Against the Prompt and Requirements&lt;br&gt;
Code Presence:&lt;/p&gt;

&lt;p&gt;Both responses provide code.&lt;br&gt;
Correct Programming Language:&lt;/p&gt;

&lt;p&gt;Both responses are written in Python.&lt;br&gt;
Function Implementation:&lt;/p&gt;

&lt;p&gt;Both responses implement a function.&lt;br&gt;
Integer Argument:&lt;/p&gt;

&lt;p&gt;Both functions receive an integer argument.&lt;br&gt;
Return Value:&lt;/p&gt;

&lt;p&gt;Response1: Returns the reversed number as a string (e.g., for input 123, it returns '321').&lt;br&gt;
Response2: Returns the reversed number as an integer (e.g., for input 123, it returns 321).&lt;br&gt;
Detailed Analysis&lt;br&gt;
Response1:&lt;br&gt;
While it correctly reverses the digits, it does not return an integer as specified in the prompt. The requirement was to return the integer written in reverse, which Response1 fails to do.&lt;br&gt;
Response2:&lt;br&gt;
This response correctly reverses the digits and converts the result back to an integer, which meets the prompt's requirement of returning the integer written in reverse.&lt;br&gt;
Conclusion&lt;br&gt;
Response2 is the better choice as it fully satisfies the prompt requirements. It defines a function that takes an integer argument and returns the reversed integer, while Response1 fails to return an integer, which is a critical requirement of the prompt.&lt;/p&gt;

&lt;p&gt;Final Evaluation for Assessment.txt&lt;/p&gt;

&lt;p&gt;Response2 is the better choice as it accurately meets the prompt requirements. It defines a function that takes an integer argument and returns the integer written in reverse as an integer. Response1, while it does provide a function that reverses the number, returns the result as a string, which does not satisfy the requirement of returning an integer. Therefore, Response2 is the more appropriate and correct response.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Computer Vision for Real-Time Water Level Mapping</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Mon, 11 Nov 2024 10:13:16 +0000</pubDate>
      <link>https://dev.to/jhparmar/computer-vision-use-realtime-waterlevel-maping-235i</link>
      <guid>https://dev.to/jhparmar/computer-vision-use-realtime-waterlevel-maping-235i</guid>
      <description>&lt;p&gt;Python code for real-time water level mapping. Here's a comprehensive approach that incorporates the best aspects of previous responses and addresses potential issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import Necessary Libraries:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;import cv2&lt;br&gt;
import numpy as np&lt;br&gt;
import time&lt;br&gt;
import matplotlib.pyplot as plt&lt;/p&gt;

&lt;p&gt;# For real-time video processing&lt;br&gt;
import imutils&lt;/p&gt;

&lt;h1&gt;
  
  
  For machine learning models
&lt;/h1&gt;

&lt;p&gt;from tensorflow.keras.models import load_model&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Load the Pre-trained Model:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Replace 'your_model.h5' with the actual path to your trained model
model = load_model('your_model.h5')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Define Functions:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def preprocess_image(image):
    # Resize the image to a fixed size (adjust as needed)
    image = cv2.resize(image, (224, 224))

    # Convert to grayscale if necessary
    # image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Normalize pixel values to the range [0, 1]
    image = image / 255.0

    # Add an extra dimension if required for your model
    image = np.expand_dims(image, axis=0)

    return image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def predict_water_level(image, model):
    # Preprocess the image
    preprocessed_image = preprocess_image(image)

    # Make a prediction using the model
    prediction = model.predict(preprocessed_image)

    # Convert the prediction to a water level value (adjust as needed)
    water_level = prediction[0][0] * 100  # Assuming prediction is in the range [0, 1]

    return water_level
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Initialize Video Capture:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cap = cv2.VideoCapture(0)  # Replace 0 with the index of your camera

# Check if the camera opened successfully
if not cap.isOpened():
    print("Error opening video stream")
    exit()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Process Video Frames in Real Time:&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;while True:
    # Read a frame from the video stream
    ret, frame = cap.read()

    # Check if the frame was read successfully
    if not ret:
        break

    # Process the frame (e.g., apply image enhancement, remove noise)
    # frame = cv2.GaussianBlur(frame, (5, 5), 0)  # Example: Apply Gaussian blur

    # Predict the water level
    water_level = predict_water_level(frame, model)

    # Display the frame and water level prediction
    cv2.putText(frame, f"Water Level: {water_level:.2f}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
    cv2.imshow('Water Level Mapping', frame)

    # Handle user input (e.g., 'q' to quit)
    if cv2.waitKey(1) == ord('q'):
        break

# Release the video capture and close windows
cap.release()
cv2.destroyAllWindows()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Key Improvements:&lt;/p&gt;

&lt;p&gt;Modular Approach: The code is well-structured with separate functions for preprocessing, prediction, and display, making it easier to modify and extend.&lt;br&gt;
Error Handling: The code checks for errors in opening the video stream and reading frames, ensuring robustness.&lt;br&gt;
Flexibility: The code allows for customization of preprocessing steps, model architecture, and prediction interpretation based on your specific requirements.&lt;br&gt;
Comments: Clear comments explain the purpose of different code sections, enhancing readability and maintainability.&lt;br&gt;
Remember to replace 'your_model.h5' with the actual path to your trained model and adjust the prediction conversion and display logic as needed.&lt;/p&gt;
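&lt;p&gt;The line water_level = prediction[0][0] * 100 assumes the model outputs values in [0, 1]. If your model instead predicts in arbitrary units, a simple linear calibration against two known reference readings can be used; a minimal sketch (function and parameter names are illustrative):&lt;/p&gt;

```python
def calibrate(raw, raw_low, raw_high, level_low, level_high):
    # Linearly map a raw model output onto a physical water-level range,
    # using two reference readings: raw_low maps to level_low, raw_high to level_high
    scale = (level_high - level_low) / (raw_high - raw_low)
    return level_low + (raw - raw_low) * scale
```

&lt;p&gt;For example, calibrate(0.5, 0.0, 1.0, 0.0, 100.0) returns 50.0.&lt;/p&gt;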

&lt;p&gt;&lt;a href="https://github.com/echang505/New-Jeans-Labeler" rel="noopener noreferrer"&gt;https://github.com/echang505/New-Jeans-Labeler&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Fine-tuning an object detection model typically involves adapting a pre-trained model</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Mon, 11 Nov 2024 08:33:30 +0000</pubDate>
      <link>https://dev.to/jhparmar/fine-tuning-an-object-detection-model-typically-involves-adapting-a-pre-trained-model-4jp1</link>
      <guid>https://dev.to/jhparmar/fine-tuning-an-object-detection-model-typically-involves-adapting-a-pre-trained-model-4jp1</guid>
      <description>&lt;p&gt;&lt;strong&gt;Fine-tuning an object detection model typically involves adapting a pre-trained model to detect new classes or improve its performance on specific object categories. Here’s an overview of the process:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Choose a Pre-trained Model
Select a pre-trained model architecture for object detection, such as:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;YOLO (You Only Look Once)&lt;br&gt;
Faster R-CNN (Region-based Convolutional Neural Networks)&lt;br&gt;
SSD (Single Shot Multibox Detector)&lt;br&gt;
These models are often pre-trained on large datasets like COCO or PASCAL VOC and can be fine-tuned for custom tasks.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Collect and Prepare Data
Data collection: Gather images containing the objects of interest.
Annotation: Use tools like LabelImg to annotate objects with bounding boxes, creating labels in formats like COCO, VOC, or custom formats.
Split data: Divide the dataset into training, validation, and test sets.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Preprocess Data&lt;br&gt;
Ensure the images are resized or normalized to match the input shape expected by the pre-trained model. Some models require specific input sizes, like 416x416 for YOLO.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Up the Environment&lt;br&gt;
Install libraries like TensorFlow, PyTorch, or Detectron2, depending on the model architecture.&lt;br&gt;
If using YOLO, frameworks like Ultralytics YOLOv5 make fine-tuning easier.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install torch torchvision
pip install yolov5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fine-tuning the Model&lt;br&gt;
Load the pre-trained model: Load weights from a model trained on a large dataset (e.g., COCO).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Modify the model: Replace the last layer to adjust the number of classes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;num_classes = 2  # Example: Background + 1 object class
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Train the model: Use your custom dataset to fine-tune the model.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
num_epochs = 10
for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    evaluate(model, data_loader_test, device=device)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;p&gt;Evaluate the Model&lt;br&gt;
Use a separate test dataset to evaluate the model's performance in terms of precision, recall, and mean Average Precision (mAP).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimize for Inference&lt;br&gt;
After fine-tuning, export the model for inference. For example, convert it to ONNX or TensorRT for deployment on edge devices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deploy the Model&lt;br&gt;
Deploy the fine-tuned model in an environment where object detection tasks need to be performed, such as on a web service or mobile app.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
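&lt;p&gt;Evaluation metrics like mAP (step 6) are built on the Intersection-over-Union between predicted and ground-truth boxes. A minimal sketch, assuming boxes in [x1, y1, x2, y2] format:&lt;/p&gt;

```python
def iou(box_a, box_b):
    # Intersection-over-Union of two axis-aligned boxes [x1, y1, x2, y2]
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

&lt;p&gt;Identical boxes give an IoU of 1.0 and disjoint boxes 0.0; a detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.&lt;/p&gt;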

</description>
    </item>
    <item>
      <title>RAG Chatbot with Amazon Bedrock &amp; LangChain: A Powerful Combination Understanding the Components</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Wed, 06 Nov 2024 02:43:35 +0000</pubDate>
      <link>https://dev.to/jhparmar/rag-chatbot-with-amazon-bedrock-langchain-a-powerful-combinationunderstanding-the-components-3gmb</link>
      <guid>https://dev.to/jhparmar/rag-chatbot-with-amazon-bedrock-langchain-a-powerful-combinationunderstanding-the-components-3gmb</guid>
      <description>&lt;p&gt;&lt;a href="https://parmarjatin4911.medium.com/rag-chatbot-with-amazon-bedrock-langchain-eb9435c211c5" rel="noopener noreferrer"&gt;RAG CHATBOT&lt;/a&gt; Before we dive into the specifics of building a RAG chatbot, let's break down the key technologies involved:&lt;/p&gt;

&lt;p&gt;Amazon Bedrock:&lt;br&gt;
A fully managed service that makes it easy to build and scale generative AI applications using foundation models from leading AI providers.&lt;br&gt;
It offers a diverse range of models, including text, code, and image generation models.&lt;/p&gt;

&lt;p&gt;LangChain:&lt;br&gt;
A framework for building applications powered by language models.&lt;br&gt;
It provides tools and abstractions for working with large language models, including prompt engineering, retrieval, and evaluation.&lt;/p&gt;

&lt;p&gt;Building a RAG Chatbot: A Step-by-Step Guide&lt;/p&gt;

&lt;p&gt;Data Preparation:&lt;/p&gt;

&lt;p&gt;Document Collection: Gather relevant documents, articles, or knowledge bases that will serve as the information source for your chatbot.&lt;br&gt;
Document Processing: Preprocess the documents to clean, tokenize, and embed them into a vector database.&lt;br&gt;
Vector Database: Use a vector database like Pinecone or Weaviate to store the document embeddings. This allows for efficient similarity search.&lt;/p&gt;

&lt;p&gt;Model Selection:&lt;/p&gt;

&lt;p&gt;Choose a suitable language model from Amazon Bedrock's offerings. Consider factors like model size, performance, and cost.&lt;br&gt;
For RAG applications, models that excel in understanding and generating text are ideal.&lt;/p&gt;

&lt;p&gt;Prompt Engineering:&lt;/p&gt;

&lt;p&gt;Craft effective prompts to guide the language model's responses.&lt;br&gt;
Consider using techniques like few-shot learning to improve the model's performance.&lt;/p&gt;

&lt;p&gt;Retrieval Augmented Generation (RAG) Pipeline:&lt;/p&gt;

&lt;p&gt;Query Embedding: When a user query is received, convert it into a vector representation.&lt;br&gt;
Similarity Search: Use the vector database to retrieve the most relevant documents based on semantic similarity to the query.&lt;br&gt;
Prompt Generation: Construct a prompt that includes the user query and the retrieved documents.&lt;br&gt;
Model Response: Pass the prompt to the language model to generate a response.&lt;/p&gt;

&lt;p&gt;Deployment and Integration:&lt;/p&gt;

&lt;p&gt;Serverless Deployment: Utilize AWS Lambda and API Gateway to deploy your chatbot as a serverless application.&lt;br&gt;
Integration with Chat Platforms: Integrate your chatbot with platforms like Slack, Microsoft Teams, or custom web interfaces.&lt;/p&gt;

&lt;p&gt;Benefits of Using Amazon Bedrock and LangChain&lt;/p&gt;

&lt;p&gt;Reduced Latency: Serverless architecture ensures fast response times.&lt;br&gt;
Scalability: Easily handle increased user traffic with automatic scaling.&lt;br&gt;
Cost-Efficiency: Pay-per-use pricing model optimizes costs.&lt;br&gt;
Flexibility: Customize your chatbot's behavior and responses through prompt engineering.&lt;br&gt;
Enhanced Accuracy: Leverage the power of large language models and semantic search.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;By combining Amazon Bedrock's powerful language models with LangChain's robust framework, you can create sophisticated RAG chatbots that deliver accurate and informative responses. This approach empowers you to build intelligent conversational AI applications that can revolutionize customer interactions and knowledge access.&lt;/p&gt;
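&lt;p&gt;To make the retrieval step concrete, here is a minimal sketch of similarity search and prompt construction in plain numpy. It is illustrative only: in the pipeline described above, the embeddings would come from an embedding model and the in-memory matrix would be replaced by the vector database.&lt;/p&gt;

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=2):
    # Cosine similarity between the query and every document embedding
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    # Indices of the k highest-scoring documents, best first
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

def build_prompt(query, retrieved):
    # Stuff the retrieved passages into the model prompt as context
    context = "\n".join(retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

&lt;p&gt;The resulting prompt string is what gets passed to the language model, so the model answers grounded in the retrieved passages rather than from its parameters alone.&lt;/p&gt;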

&lt;p&gt;Would you like to delve deeper into any specific aspect of RAG chatbot development or explore a practical example?&lt;/p&gt;

&lt;p&gt;Here are some potential areas we could discuss further:&lt;/p&gt;

&lt;p&gt;Advanced Prompt Engineering Techniques&lt;br&gt;
Fine-tuning Language Models for Specific Domains&lt;br&gt;
Evaluating Chatbot Performance&lt;br&gt;
Ethical Considerations in AI Chatbots&lt;/p&gt;

&lt;p&gt;Please feel free to ask any questions you may have.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Google Earth Engine &amp; QGIS: Querying and Exporting Sentinel-2 Images</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Fri, 23 Aug 2024 08:28:04 +0000</pubDate>
      <link>https://dev.to/jhparmar/google-earth-engine-qgis-querying-and-exporting-sentinel-2-images-5dm8</link>
      <guid>https://dev.to/jhparmar/google-earth-engine-qgis-querying-and-exporting-sentinel-2-images-5dm8</guid>
      <description>&lt;p&gt;In this post, I will outline how to query and export Sentinel-2 images in Google Earth Engine. In GEE, you can query and merge multiple images for the time period and region of your choice. For example, it is possible to collect multiple images observed over a year of a specific national park, and then merge them into one image by summarizing (e.g., median) only the data with less than 5% cloud cover at the pixel level. Additionally, you can export this to Google Drive, download it, and then use it in geospatial software such as QGIS. Let’s take a look at the process through code.&lt;/p&gt;

&lt;p&gt;First, I will call the ee and geemap libraries and proceed with Earth Engine authentication and initialization.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import ee
import geemap

# Authenticate to Earth Engine
ee.Authenticate()

# Initialize Earth Engine
ee.Initialize(project='my-project')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The target area will be Palgongsan National Park in South Korea. The boundary of Palgongsan National Park can be called from the WDPA Feature Collection using the WDPA ID.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Palgongsan: Protected area with WDPA ID 555571373
wdpa = ee.FeatureCollection("WCMC/WDPA/current/polygons") \
    .filter(ee.Filter.eq('WDPAID', 555571373))
aoi = wdpa.geometry()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, let’s define a Sentinel-2 Image Collection that meets specific criteria. The period is from January 1, 2024, to January 31, 2024, and the area is within the boundary of Palgongsan National Park with a cloud cover of less than 5%.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Select Sentinel-2 Image Collection
collection = ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED") \
    .filterDate('2024-01-01', '2024-01-31') \
    .filterBounds(aoi) \
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 5))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Print the size of the Sentinel-2 Image Collection. A total of 11 images were queried, which are the results of checking all images that overlap with the Palgongsan National Park area while satisfying the date and cloud cover conditions.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the size of the S2 Image Collection
collection_size = collection.size()
print("The size of Sentinel-2 Image Collection:", collection_size.getInfo())
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The size of the S2 Image Collection: 11&lt;/p&gt;

&lt;p&gt;You can inspect the attribute information of each image in the queried Sentinel-2 Image Collection as follows.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the attribute information of each image in the S2 Image Collection
image_list = collection.toList(collection_size)
for i in range(collection_size.getInfo()):
    image = ee.Image(image_list.get(i))
    date = image.date().format("YYYY-MM-dd").getInfo()
    cloud_percentage = image.get("CLOUDY_PIXEL_PERCENTAGE").getInfo()
    print(f"Image {i+1}: Date={date}, Cloudy Pixel={cloud_percentage}%")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Image 1: Date=2024-01-04, Cloudy Pixel=3.074293%&lt;br&gt;
Image 2: Date=2024-01-04, Cloudy Pixel=1.332756%&lt;br&gt;
Image 3: Date=2024-01-07, Cloudy Pixel=4.386907%&lt;br&gt;
Image 4: Date=2024-01-07, Cloudy Pixel=4.488421%&lt;br&gt;
Image 5: Date=2024-01-12, Cloudy Pixel=0.448056%&lt;br&gt;
Image 6: Date=2024-01-12, Cloudy Pixel=1.065743%&lt;br&gt;
Image 7: Date=2024-01-14, Cloudy Pixel=0.721162%&lt;br&gt;
Image 8: Date=2024-01-27, Cloudy Pixel=0.220503%&lt;br&gt;
Image 9: Date=2024-01-27, Cloudy Pixel=2.45147%&lt;br&gt;
Image 10: Date=2024-01-29, Cloudy Pixel=2.292939%&lt;br&gt;
Image 11: Date=2024-01-29, Cloudy Pixel=4.434421%&lt;/p&gt;

&lt;p&gt;The pixel values of each image in the Sentinel-2 Image Collection can be easily summarized. For example, with the following definition, calculating the median image is straightforward. Simple, isn’t it?&lt;/p&gt;
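&lt;p&gt;Conceptually, collection.median() reduces the image stack pixel by pixel (and band by band). The same reduction in plain numpy, on a toy stack of three 2x2 single-band images, looks like this (values are illustrative):&lt;/p&gt;

```python
import numpy as np

# A stack of 3 single-band images, each 2x2 pixels
stack = np.array([
    [[10, 20], [30, 40]],
    [[12, 18], [29, 41]],
    [[11, 22], [31, 39]],
])

# Per-pixel median across the stack axis, like ee.ImageCollection.median()
median_image = np.median(stack, axis=0)
```

&lt;p&gt;Each output pixel is the median of the three co-located input pixels, which is why a single cloudy outlier at a pixel has little effect on the composite.&lt;/p&gt;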

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Calculate the median image
median_image = collection.median()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now let’s map the median Sentinel-2 image along with the boundary of Palgongsan National Park. I’ll set the band combination of the median image to natural color (B4, B3, B2).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Map object creation
Map = geemap.Map(width="800px", height="500px")

# Add median image
Map.addLayer(median_image, {'bands': ['B4', 'B3', 'B2'], 'max': 2000}, 'Median Image')

# Add AOI
Map.addLayer(aoi, {
  'color': '00FF00',  # Green
  'width': 2  # Outline thickness
}, 'Palgongsan National Park')

Map.centerObject(aoi, 12)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3brpq8i471tttn1sf9fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3brpq8i471tttn1sf9fi.png" alt="Image description" width="718" height="462"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s export the median image to Google Drive. You can set the name (description), folder, region, scale, maximum pixels (maxPixels), etc., as shown below. Here, description refers to the file name, and maxPixels is set to 1e13 (10 trillion pixels) to prevent errors from exceeding the default pixel limit.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Export the image to Google Drive
geemap.ee_export_image_to_drive(median_image,
                                description='median_image',
                                folder='export',
                                region=aoi,
                                scale=10,
                                maxPixels=1e13)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The process of exporting the image to Google Drive may take some time depending on the file size. Once completed, you can find the median_image.tif file in the specified folder as shown below. You can download it and open it in QGIS.&lt;/p&gt;

&lt;p&gt;Open the data in QGIS, setting the project coordinate system to EPSG:32652 and the band combination to natural color (B4, B3, B2). The median_image.tif file is a single multi-band image that contains complete Sentinel-2 band information, allowing for additional analysis through QGIS functionalities.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>TorchGeo: How to Download the NWPU VHR-10 Dataset</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Fri, 23 Aug 2024 08:25:37 +0000</pubDate>
      <link>https://dev.to/jhparmar/torchgeo-how-to-download-the-nwpu-vhr-10-dataset-21m9</link>
      <guid>https://dev.to/jhparmar/torchgeo-how-to-download-the-nwpu-vhr-10-dataset-21m9</guid>
      <description>&lt;p&gt;&lt;a href="https://torchgeo.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;TorchGeo&lt;/a&gt; is a PyTorch domain library similar to torchvision, specialized in geospatial data, offering datasets, samplers, transformations, and pre-trained models dedicated to this field. This guide introduces how to easily download the NWPU VHR-10 dataset from TorchGeo.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://torchgeo.readthedocs.io/en/stable/api/datasets.html#vhr-10" rel="noopener noreferrer"&gt;VHR-10&lt;/a&gt; dataset is a collection of Very High Resolution remote sensing images of 10 classes, provided by Northwestern Polytechnical University (NWPU) in China.&lt;/p&gt;

&lt;p&gt;Comprising a total of 800 VHR optical remote sensing images, 715 of these are color images acquired from Google Earth with spatial resolutions between 0.5 to 2 meters. The remaining 85 are color-infrared (CIR) images pan-sharpened, obtained from the Vaihingen dataset with a spatial resolution of 0.08 meters.&lt;/p&gt;

&lt;p&gt;The dataset is divided into two sets:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Positive image set (650 images): Images containing at least one object of interest.
Negative image set (150 images): Images with no objects of interest.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The positive image set includes objects from the following ten classes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Airplanes (757)
Ships (302)
Storage tanks (655)
Baseball diamonds (390)
Tennis courts (524)
Basketball courts (159)
Ground track fields (163)
Harbors (224)
Bridges (124)
Vehicles (477)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It includes object detection bounding boxes and instance segmentation masks.&lt;/p&gt;

&lt;p&gt;The TorchGeo documentation guides the use of additional libraries for utilizing this dataset as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Since the dataset is stored in RAR files, [rarfile](https://pypi.org/project/rarfile/) is needed for extraction.
[pycocotools ](https://pypi.org/project/pycocotools/)is required to load the annotations.json file for the “positive” image set.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;However, there are additional considerations beyond the libraries mentioned that need to be taken into account. In this context, I will share information obtained from the How to Download the VHR Dataset discussion.&lt;/p&gt;

&lt;p&gt;Basic Code for Downloading the VHR-10 Dataset&lt;/p&gt;

&lt;p&gt;TorchGeo installs only a core set of dependencies with pip install torchgeo to keep the installation relatively lightweight. For a full installation that includes the optional set of dependencies, you can use pip install torchgeo[datasets]. If you opt for the latter, rarfile and pycocotools, which are required for using the VHR-10 dataset, will be installed automatically. For more information, you might want to refer to the &lt;a href="https://github.com/microsoft/torchgeo/discussions/662" rel="noopener noreferrer"&gt;Required vs. optional dependencies&lt;/a&gt; discussion.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install torchgeo: Installs the "Required" dependency set.
pip install torchgeo[datasets]: Installs the full package including "Optional" dependency set.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;%pip install -q -U torchgeo[datasets]&lt;/p&gt;

&lt;p&gt;Import the necessary libraries and check the version of TorchGeo.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import torchgeo
from torchgeo.datasets import VHR10
import matplotlib.pyplot as plt

torchgeo.__version__
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;'0.5.2'&lt;/p&gt;

&lt;p&gt;The code below downloads the VHR10 dataset to the location data/VHR10/ with the 'positive' split and applies transformations using the given preprocessing function. The role of the preprocess function is to convert each image into a floating-point type and normalize the values between 0 and 1 by dividing by 255. This process helps in preparing the data in a form that is suitable for model training.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def preprocess(sample):
    sample["image"] = sample["image"].float() / 255.0
    return sample

ds = VHR10(
    root="data/VHR10/",
    split="positive",
    transforms=preprocess,
    download=True,
    checksum=True,
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Error Type 1: gdown Module Not Installed&lt;/p&gt;

&lt;p&gt;TorchGeo downloads the NWPU VHR-10 dataset.rar file from Google Drive. If the gdown module is not installed on your system, you may encounter the ModuleNotFoundError: No module named 'gdown' error. Therefore, execute the pip install gdown command to install the gdown module additionally.&lt;/p&gt;

&lt;p&gt;%pip install -q -U gdown&lt;/p&gt;

&lt;p&gt;Error Type 2: unrar Not Installed&lt;/p&gt;

&lt;p&gt;The rarfile package requires unrar to decompress files in the RAR format. If unrar is not installed on your system, you may encounter the RarCannotExec: Cannot find working tool or RarCannotExec: Unrar not installed error. In such cases, unrar needs to be installed manually. I resolved the issue by installing the Complete package file on Windows and adding 'C:\Program Files (x86)\GnuWin32\bin' to the PATH environment variable.&lt;/p&gt;

&lt;p&gt;Error Type 3: Google Drive File URL Retrieval Error&lt;/p&gt;

&lt;p&gt;While installing the gdown module is unnecessary on Google Colab, you may encounter another error: FileURLRetrievalError: Cannot retrieve the public link of the file. You may need to change the permission to 'Anyone with the link', or have had many accesses.. The issue of downloading datasets from Google Drive has been a topic of discussion for a long time. In Colab, you can bypass this error by first downloading the NWPU VHR-10 dataset.rar file using the modified code below, then running the original code again. This approach allows you to use the VHR-10 dataset in Colab without issues.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os, gdown

os.makedirs('data/VHR10/', exist_ok=True)

url = 'https://drive.google.com/uc?id=1--foZ3dV5OCsqXQXT84UeKtrAqc5CkAE'
output_path = 'data/VHR10/NWPU VHR-10 dataset.rar'
gdown.download(url, output_path, quiet=False)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Exploring the VHR-10 Dataset&lt;/p&gt;

&lt;p&gt;Let’s now take a look at the positive image set. There are a total of 650 images.&lt;/p&gt;

&lt;p&gt;print(f"VHR-10 dataset: {len(ds)}")&lt;/p&gt;

&lt;p&gt;VHR-10 dataset: 650&lt;/p&gt;

&lt;p&gt;To access the first item in the ds dataset and check the dimensions of the image corresponding to the "image" key, you can use the shape attribute. The shape attribute returns the dimensions of the image in the form of (number of channels, height, width).&lt;/p&gt;

&lt;p&gt;ds[0]["image"].shape&lt;/p&gt;

&lt;p&gt;torch.Size([3, 808, 958])&lt;/p&gt;

&lt;p&gt;To open and display the sixth item’s image from the dataset, you’ll need to change its dimensions from (number of channels, height, width) to (height, width, number of channels).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image = ds[5]["image"].permute(1, 2, 0)
plt.imshow(image)
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lk064op4bjsxrpvei1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lk064op4bjsxrpvei1e.png" alt="Image description" width="698" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The VHR10 dataset in TorchGeo features a plot method that visualizes annotations included in the annotations.json. These annotations provide information for object detection and instance segmentation, including visual elements like bounding boxes and masks. This allows for direct visualization and verification of images marked with ground truth.&lt;/p&gt;

&lt;p&gt;You can experience an example of object detection training in TorchGeo through torchgeo_object_detection_example.ipynb provided by Caleb Robinson from Microsoft AI for Earth.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ds.plot(ds[5])
plt.savefig('ground_truth.png', bbox_inches='tight')
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenj1bg6maghzp1h7k6w5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fenj1bg6maghzp1h7k6w5.png" alt="Image description" width="698" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/osgeokr/torchgeo-explained/blob/main/TorchGeo_How_to_Download_the_NWPU_VHR-10_Dataset.ipynb" rel="noopener noreferrer"&gt;Github code&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Species Distribution Modeling Using Google Earth Engine</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Fri, 23 Aug 2024 08:19:55 +0000</pubDate>
      <link>https://dev.to/jhparmar/species-distribution-modeling-using-google-earth-engine-ani</link>
      <guid>https://dev.to/jhparmar/species-distribution-modeling-using-google-earth-engine-ani</guid>
      <description>&lt;p&gt;I am excited to share the news that I have become a contributor to the &lt;a href="https://developers.google.com/earth-engine/tutorials" rel="noopener noreferrer"&gt;Google Earth Engine Tutorials&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I have added a tutorial on Species Distribution Modeling. You can directly access it via the &lt;a href="https://developers.google.com/earth-engine/tutorials/community/species-distribution-modeling?source=post_page-----fe6b3fed8967--------------------------------" rel="noopener noreferrer"&gt;URL&lt;/a&gt; below:&lt;/p&gt;

&lt;p&gt;The tutorial outlines the entire process of implementing species distribution modeling in Google Earth Engine using the Presence-Absence Method. I hope it will be helpful to those interested, and I welcome suggestions for tutorials on other topics.&lt;/p&gt;

&lt;p&gt;I would like to express my sincere gratitude to Justin Braaten from the Google Earth Engine project, who supported me throughout the creation of this tutorial, and Dr. Ramiro Daniel Crego from University College Cork in Ireland, who presented the research potential of species distribution modeling based on Google Earth Engine.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>TorchGeo: An Introduction to Object Detection Example</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Fri, 23 Aug 2024 08:16:30 +0000</pubDate>
      <link>https://dev.to/jhparmar/torchgeo-an-introduction-to-object-detection-example-dc2</link>
      <guid>https://dev.to/jhparmar/torchgeo-an-introduction-to-object-detection-example-dc2</guid>
      <description>&lt;p&gt;TorchGeo is a PyTorch domain library similar to torchvision, specialized for geospatial data. It offers datasets, samplers, transformations, and pre-trained models tailored for geospatial information. This tutorial will introduce an object detection example in TorchGeo.&lt;/p&gt;

&lt;p&gt;This example is based on the notebook &lt;a href="https://gist.github.com/calebrob6/d32673997661a75d1a5e2e5c0afbe68a" rel="noopener noreferrer"&gt;Code demo&lt;/a&gt; provided by Caleb Robinson from Microsoft AI for Good, with added explanations.&lt;/p&gt;

&lt;p&gt;GPU Selection&lt;/p&gt;

&lt;p&gt;Before starting training, select Runtime &amp;gt; Change runtime type from the notebook’s top menu, choose GPU from the Hardware accelerator menu, and save. I subscribe to Colab Pro and used an A100 GPU.&lt;/p&gt;

&lt;p&gt;Installing TorchGeo&lt;/p&gt;

&lt;p&gt;TorchGeo installs only a set of essential dependencies by default with pip install torchgeo to keep the installation relatively lightweight. For a full installation that includes an optional set of dependencies, you can use pip install torchgeo[datasets].&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install torchgeo: Installs the "Required" set of dependencies.
pip install torchgeo[datasets]: Full installation that includes the "Optional" set of dependencies.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;%pip install -q -U torchgeo[datasets]&lt;/p&gt;

&lt;p&gt;Installing PyTorch Lightning&lt;/p&gt;

&lt;p&gt;PyTorch Lightning offers a high-level interface for PyTorch, simplifying and streamlining the model training process.&lt;/p&gt;

&lt;p&gt;!pip install -q -U lightning&lt;/p&gt;

&lt;p&gt;Downloading the VHR-10 Dataset&lt;/p&gt;

&lt;p&gt;The VHR-10 dataset, provided by Northwestern Polytechnical University (NWPU) in China, is a Very High Resolution (VHR) remote sensing image dataset encompassing 10 classes.&lt;/p&gt;

&lt;p&gt;The dataset comprises a total of 800 VHR optical remote sensing images: 715 color images acquired from Google Earth with spatial resolutions ranging from 0.5 to 2 meters, and 85 pan-sharpened Color InfraRed (CIR) images, derived from the Vaihingen dataset, with a spatial resolution of 0.08 meters.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Note: Pan-sharpening is a technique that combines the high-resolution detail of the panchromatic band with the lower resolution color information of other bands.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The dataset is divided into two sets:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Positive image set (650 images): Images containing at least one object.
Negative image set (150 images): Images without any objects.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The positive image set includes objects from the following ten classes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Airplanes (757 instances)
Ships (302 instances)
Storage tanks (655 instances)
Baseball diamonds (390 instances)
Tennis courts (524 instances)
Basketball courts (159 instances)
Ground track fields (163 instances)
Harbors (224 instances)
Bridges (124 instances)
Vehicles (477 instances)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The dataset includes object detection bounding boxes and instance segmentation masks.&lt;/p&gt;

&lt;p&gt;When using this dataset for research, please cite the following papers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://doi.org/10.1016/j.isprsjprs.2014.10.002" rel="noopener noreferrer"&gt;1&lt;/a&gt;&lt;a href="https://doi.org/10.1109/IGARSS.2019.8898573" rel="noopener noreferrer"&gt;2&lt;/a&gt;[3]&lt;br&gt;
    &lt;a href="https://doi.org/10.3390/rs12060989" rel="noopener noreferrer"&gt;https://doi.org/10.3390/rs12060989&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Import the necessary libraries and download the VHR-10 dataset.&lt;/p&gt;

&lt;p&gt;import torchgeo&lt;br&gt;
from torchgeo.datasets import VHR10&lt;br&gt;
from torchgeo.trainers import ObjectDetectionTask&lt;/p&gt;

&lt;p&gt;import torch&lt;br&gt;
from torch.utils.data import DataLoader&lt;br&gt;
import lightning.pytorch as pl&lt;/p&gt;

&lt;p&gt;import matplotlib.pyplot as plt&lt;/p&gt;

&lt;p&gt;import os, gdown&lt;/p&gt;

&lt;p&gt;os.makedirs('data/VHR10/', exist_ok=True)&lt;/p&gt;

&lt;p&gt;url = '&lt;a href="https://drive.google.com/uc?id=1--foZ3dV5OCsqXQXT84UeKtrAqc5CkAE" rel="noopener noreferrer"&gt;https://drive.google.com/uc?id=1--foZ3dV5OCsqXQXT84UeKtrAqc5CkAE&lt;/a&gt;'&lt;br&gt;
output_path = 'data/VHR10/NWPU VHR-10 dataset.rar'&lt;br&gt;
gdown.download(url, output_path, quiet=False)&lt;/p&gt;

&lt;p&gt;The code below downloads the VHR10 dataset to the data/VHR10/ location with the 'positive' split, and applies transformations using a given preprocessing function. The role of the preprocess function is to convert each image to a floating-point type and normalize the values to be between 0 and 1 by dividing by 255. This process helps prepare the data in a form suitable for model training.&lt;/p&gt;

&lt;p&gt;def preprocess(sample):&lt;br&gt;
    sample["image"] = sample["image"].float() / 255.0&lt;br&gt;
    return sample&lt;/p&gt;
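&lt;p&gt;As a quick sanity check (not part of the original notebook), the normalization can be verified on a dummy uint8 tensor before running it on the real dataset:&lt;/p&gt;

```python
import torch

def preprocess(sample):
    # Convert the image to float and scale pixel values from [0, 255] to [0, 1]
    sample["image"] = sample["image"].float() / 255.0
    return sample

# Dummy 3-pixel "image" covering the minimum, middle, and maximum uint8 values
dummy = {"image": torch.tensor([[[0, 128, 255]]], dtype=torch.uint8)}
out = preprocess(dummy)
print(out["image"].dtype)  # torch.float32
print(out["image"].max())  # tensor(1.)
```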

&lt;p&gt;ds = VHR10(&lt;br&gt;
    root="data/VHR10/",&lt;br&gt;
    split="positive",&lt;br&gt;
    transforms=preprocess,&lt;br&gt;
    download=True,&lt;br&gt;
    checksum=True,&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;Exploring the VHR-10 Dataset&lt;/p&gt;

&lt;p&gt;Let’s now take a look at the positive image set. There are a total of 650 images.&lt;/p&gt;

&lt;p&gt;print(f"VHR-10 dataset: {len(ds)}")&lt;/p&gt;

&lt;p&gt;Access the first item in the ds dataset and check the dimensions of the image corresponding to the "image" key. The shape attribute returns the dimensions of the image, which are represented in the format (number of channels, height, width).&lt;/p&gt;

&lt;p&gt;ds[0]["image"].shape&lt;/p&gt;

&lt;p&gt;torch.Size([3, 808, 958])&lt;/p&gt;

&lt;p&gt;Let’s display the image of the sixth item, ds[5]. Change the dimensions of the image from (number of channels, height, width) to (height, width, number of channels) before displaying it.&lt;/p&gt;

&lt;p&gt;image = ds[5]["image"].permute(1, 2, 0)&lt;br&gt;
plt.imshow(image)&lt;br&gt;
plt.show()&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5671rcmaw1r68gjlsfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5671rcmaw1r68gjlsfj.png" alt="Image description" width="541" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The VHR10 dataset in TorchGeo includes a plot method that visualizes annotations contained in annotations.json. These annotations provide information for object detection and instance segmentation, including visual elements like bounding boxes and masks. This allows for direct visualization of images with ground truth annotations marked on them.&lt;/p&gt;

&lt;p&gt;ds.plot(ds[5])&lt;br&gt;
plt.savefig('ground_truth.png', bbox_inches='tight')&lt;br&gt;
plt.show()&lt;/p&gt;

&lt;p&gt;Model Training&lt;/p&gt;

&lt;p&gt;The following demonstrates the process of using PyTorch’s DataLoader to define a collate_fn function and utilize it to load data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The collate_fn function extracts the image, boxes, labels, and masks from each item in a given batch and constructs a new batch dictionary. This newly constructed batch is then directly used for model training or evaluation. The DataLoader prepares the data in batch units for the model using this function. The option shuffle=True randomly shuffles the order of the dataset to reduce dependency on the sequence of data during model training.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;def collate_fn(batch):&lt;br&gt;
    new_batch = {&lt;br&gt;
        "image": [item["image"] for item in batch],  # Images&lt;br&gt;
        "boxes": [item["boxes"] for item in batch],  # Bounding boxes&lt;br&gt;
        "labels": [item["labels"] for item in batch],  # Labels&lt;br&gt;
        "masks": [item["masks"] for item in batch],  # Masks&lt;br&gt;
    }&lt;br&gt;
    return new_batch  # Return the new batch&lt;/p&gt;
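&lt;p&gt;To make the structure concrete, here is a minimal sketch (with dummy tensors, not the real VHR-10 data) of what collate_fn returns — lists of per-sample tensors, which is what lets images of different sizes share a batch:&lt;/p&gt;

```python
import torch

def collate_fn(batch):
    # Gather each field into a list, one entry per sample, so images of
    # different sizes can coexist in the same batch
    return {
        "image": [item["image"] for item in batch],
        "boxes": [item["boxes"] for item in batch],
        "labels": [item["labels"] for item in batch],
        "masks": [item["masks"] for item in batch],
    }

# Two dummy samples with different image sizes
samples = [
    {"image": torch.zeros(3, 4, 4), "boxes": torch.zeros(1, 4),
     "labels": torch.zeros(1, dtype=torch.int64), "masks": torch.zeros(1, 4, 4)},
    {"image": torch.zeros(3, 6, 6), "boxes": torch.zeros(2, 4),
     "labels": torch.zeros(2, dtype=torch.int64), "masks": torch.zeros(2, 6, 6)},
]
batch = collate_fn(samples)
print(len(batch["image"]))      # 2
print(batch["image"][1].shape)  # torch.Size([3, 6, 6])
```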

&lt;h1&gt;
  
  
  Data Loader
&lt;/h1&gt;

&lt;p&gt;dl = DataLoader(&lt;br&gt;
    ds,  # Dataset&lt;br&gt;
    batch_size=32,  # Number of data to load at one time&lt;br&gt;
    num_workers=2,  # Number of processes to use for data loading&lt;br&gt;
    shuffle=True,  # Whether to shuffle the dataset before loading&lt;br&gt;
    collate_fn=collate_fn,  # collate_fn function for batch processing&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;This code defines a training class for object detection tasks and creates an instance of it. Importantly, the class is designed to handle variable-sized inputs.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The VariableSizeInputObjectDetectionTask class inherits from a standard ObjectDetectionTask and defines the training_step method to process variable-sized input images within each batch. This allows the model to effectively learn from input images of various sizes. The created instance is prepared to perform object detection tasks using the Faster R-CNN model with specified settings.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;class VariableSizeInputObjectDetectionTask(ObjectDetectionTask):&lt;br&gt;
    # Define the training step&lt;br&gt;
    def training_step(self, batch, batch_idx, dataloader_idx=0):&lt;br&gt;
        x = batch["image"]  # Image&lt;br&gt;
        batch_size = len(x)  # Set batch size (number of images)&lt;br&gt;
        y = [&lt;br&gt;
            {"boxes": batch["boxes"][i], "labels": batch["labels"][i]}&lt;br&gt;
            for i in range(batch_size)&lt;br&gt;
        ] # Extract bounding box and label information for each image&lt;br&gt;
        loss_dict = self(x, y)  # Loss&lt;br&gt;
    train_loss = sum(loss_dict.values())  # Training loss (sum of loss values)&lt;br&gt;
        self.log_dict(loss_dict)  # Record loss values&lt;br&gt;
        return train_loss  # Return training loss&lt;/p&gt;

&lt;p&gt;task = VariableSizeInputObjectDetectionTask(&lt;br&gt;
    model="faster-rcnn",  # Faster R-CNN model&lt;br&gt;
    backbone="resnet18",  # ResNet18 neural network architecture&lt;br&gt;
    weights=True,  # Use pretrained weights&lt;br&gt;
    in_channels=3,  # Number of channels in the input image (RGB images)&lt;br&gt;
    num_classes=11,  # Number of classes to classify (10 + background)&lt;br&gt;
    trainable_layers=3,  # Number of trainable layers&lt;br&gt;
    lr=1e-3,  # Learning rate&lt;br&gt;
    patience=10,  # Set the number of patience iterations for early stopping&lt;br&gt;
    freeze_backbone=False,  # Whether to train with the backbone network weights unfrozen&lt;br&gt;
)&lt;br&gt;
task.monitor = "loss_classifier"  # Set the metric to monitor (here, the classifier's loss)&lt;/p&gt;

&lt;p&gt;Prepare the configuration for model training using the PyTorch Lightning library. The settings below specify training the model with GPU, saving training logs and checkpoints to the ‘logs/’ directory, and setting the training to run for a minimum of 6 epochs and a maximum of 100 epochs.&lt;/p&gt;

&lt;p&gt;trainer = pl.Trainer(&lt;br&gt;
    default_root_dir="logs/",  # Set the default directory&lt;br&gt;
    accelerator="gpu",  # Set the type of hardware accelerator for training (using GPU)&lt;br&gt;
    devices=[0],  # List of device IDs to use ([0] means the first GPU)&lt;br&gt;
    min_epochs=6,  # Set the minimum number of training epochs&lt;br&gt;
    max_epochs=100,  # Set the maximum number of training epochs&lt;br&gt;
    log_every_n_steps=20,  # Set how often to log after a number of steps&lt;br&gt;
)&lt;/p&gt;

&lt;p&gt;%%time&lt;/p&gt;

&lt;h1&gt;
  
  
  Model training
&lt;/h1&gt;

&lt;p&gt;trainer.fit(task, train_dataloaders=dl)&lt;/p&gt;

&lt;p&gt;Model Inference Example&lt;/p&gt;

&lt;p&gt;Retrieve the next batch from the data loader (dl).&lt;/p&gt;

&lt;p&gt;batch = next(iter(dl))&lt;/p&gt;

&lt;p&gt;Obtain the model from the task (task) and set it to evaluation mode. This action deactivates specific layers, such as Dropout, that are used during training. Dropout is a highly effective regularization technique designed to prevent overfitting in neural networks.&lt;/p&gt;

&lt;p&gt;Using torch.no_grad() disables gradient calculations, reducing memory usage and increasing computation speed. This is utilized when the model is not being updated during evaluation or inference phases. Now, pass the image batch through the model to obtain the prediction results.&lt;/p&gt;

&lt;p&gt;model = task.model&lt;br&gt;
model.eval()&lt;/p&gt;

&lt;p&gt;with torch.no_grad():&lt;br&gt;
  out = model(batch["image"])&lt;/p&gt;

&lt;p&gt;Define a sample for a specific batch index.&lt;/p&gt;

&lt;p&gt;def create_sample(batch, out, batch_idx):&lt;br&gt;
    return {&lt;br&gt;
        "image": batch["image"][batch_idx],  # Image&lt;br&gt;
        "boxes": batch["boxes"][batch_idx],  # Actual bounding boxes&lt;br&gt;
        "labels": batch["labels"][batch_idx],  # Actual labels&lt;br&gt;
        "masks": batch["masks"][batch_idx],  # Actual masks&lt;br&gt;
        "prediction_labels": out[batch_idx]["labels"],  # Labels predicted by the model&lt;br&gt;
        "prediction_boxes": out[batch_idx]["boxes"],  # Bounding boxes predicted by the model&lt;br&gt;
        "prediction_scores": out[batch_idx]["scores"],  # Confidence scores for each prediction&lt;br&gt;
    }&lt;/p&gt;

&lt;p&gt;batch_idx = 0&lt;br&gt;
sample = create_sample(batch, out, batch_idx)&lt;/p&gt;

&lt;p&gt;Now, visualize the given sample. The plot method visualizes the image included in the sample, along with the actual labels and bounding boxes, and the predicted labels and bounding boxes.&lt;/p&gt;

&lt;p&gt;ds.plot(sample)&lt;br&gt;
plt.savefig('inference.png', bbox_inches='tight')&lt;br&gt;
plt.show()&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntzfnt0hz7fycf4blrj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntzfnt0hz7fycf4blrj5.png" alt="Image description" width="698" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Visualizing Sample for Batch Index 3
&lt;/h1&gt;

&lt;p&gt;batch_idx = 3&lt;br&gt;
sample = create_sample(batch, out, batch_idx)&lt;/p&gt;

&lt;p&gt;ds.plot(sample)&lt;br&gt;
plt.show()&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8ypliy3jc1lc4g5sndq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8ypliy3jc1lc4g5sndq.png" alt="Image description" width="698" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F823r1bokcr0btecnglm2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F823r1bokcr0btecnglm2.png" alt="Image description" width="698" height="573"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Visualizing Sample for Batch Index 5
&lt;/h1&gt;

&lt;p&gt;batch_idx = 5&lt;br&gt;
sample = create_sample(batch, out, batch_idx)&lt;/p&gt;

&lt;p&gt;ds.plot(sample)&lt;br&gt;
plt.show()&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1ldrw86z19smnow3bzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1ldrw86z19smnow3bzc.png" alt="Image description" width="698" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ISRO Free AI/ML Courses</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Wed, 07 Aug 2024 18:21:03 +0000</pubDate>
      <link>https://dev.to/jhparmar/isro-free-aiml-courses-2daa</link>
      <guid>https://dev.to/jhparmar/isro-free-aiml-courses-2daa</guid>
      <description>&lt;p&gt;&lt;a href="https://elearning.iirs.gov.in/edusatregistration/" rel="noopener noreferrer"&gt;https://elearning.iirs.gov.in/edusatregistration/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI INSPIRATIONAL QUOTE</title>
      <dc:creator>parmarjatin4911@gmail.com</dc:creator>
      <pubDate>Wed, 07 Aug 2024 18:11:25 +0000</pubDate>
      <link>https://dev.to/jhparmar/ai-inspirational-quote-73m</link>
      <guid>https://dev.to/jhparmar/ai-inspirational-quote-73m</guid>
      <description>&lt;p&gt;Belief Creates the Fact&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbizablxbpltbo1b0cx73.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbizablxbpltbo1b0cx73.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
