<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Isaac Kinyanjui Ngugi</title>
    <description>The latest articles on DEV Community by Isaac Kinyanjui Ngugi (@iamisaackn).</description>
    <link>https://dev.to/iamisaackn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1392809%2F644499d4-63a9-49b5-8d74-c5f6fe7f82c6.jpg</url>
      <title>DEV Community: Isaac Kinyanjui Ngugi</title>
      <link>https://dev.to/iamisaackn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamisaackn"/>
    <language>en</language>
    <item>
      <title>AI AGENTS AND HOW TO BUILD THEM</title>
      <dc:creator>Isaac Kinyanjui Ngugi</dc:creator>
      <pubDate>Wed, 24 Sep 2025 19:26:27 +0000</pubDate>
      <link>https://dev.to/iamisaackn/ai-agents-and-how-to-build-them-20nd</link>
      <guid>https://dev.to/iamisaackn/ai-agents-and-how-to-build-them-20nd</guid>
      <description>&lt;p&gt;They have gone from simple rule-based scripts to autonomous systems that learn, adapt, and make decisions. They’re no longer toys, they’re already reshaping industries. &lt;/p&gt;

&lt;p&gt;Let’s break down what they are, why they matter, and how to actually build one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an AI Agent?
&lt;/h2&gt;

&lt;p&gt;An AI Agent is any system that can &lt;strong&gt;sense its environment, think (reason/learn), and act on that environment&lt;/strong&gt; to achieve goals.&lt;/p&gt;

&lt;p&gt;Think of it as a digital entity that runs on the &lt;strong&gt;Sense → Think → Act&lt;/strong&gt; loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sense&lt;/strong&gt; – collect inputs (data, text, images, API calls).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Think&lt;/strong&gt; – analyze, reason, and decide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Act&lt;/strong&gt; – execute actions (update database, flag fraud, recommend product).&lt;/li&gt;
&lt;/ol&gt;
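
&lt;p&gt;As a minimal sketch (a toy thermostat, not a real agent framework), the loop above maps directly onto three methods:&lt;/p&gt;

```python
class ThermostatAgent:
    """Toy agent illustrating the Sense, Think, Act loop."""

    def __init__(self, target_temp=21.0):
        self.target_temp = target_temp

    def sense(self, environment):
        # Sense: read the current temperature from the environment
        return environment["temperature"]

    def think(self, temperature):
        # Think: decide whether heating is needed
        return "heat_off" if temperature >= self.target_temp else "heat_on"

    def act(self, decision):
        # Act: in a real system this would command the heater
        return decision

    def run(self, environment):
        return self.act(self.think(self.sense(environment)))

agent = ThermostatAgent(target_temp=21.0)
print(agent.run({"temperature": 18.5}))  # heat_on
```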

&lt;h2&gt;
  
  
  Types of AI Agents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reactive Agents&lt;/strong&gt; – simple “if-this-then-that” (like spam filters).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deliberative Agents&lt;/strong&gt; – keep memory, plan ahead (like inventory optimization).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid Agents&lt;/strong&gt; – mix of both (fast responses + long-term planning).&lt;/li&gt;
&lt;/ul&gt;
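
&lt;p&gt;A reactive agent really is just a fixed rule. Here’s a sketch of the spam-filter example; the keyword list is illustrative, and a real filter would use many more signals:&lt;/p&gt;

```python
# Illustrative keyword list -- an assumption for the sketch, not a real ruleset
SPAM_KEYWORDS = ("free money", "click here", "you are a winner")

def reactive_spam_filter(message: str) -> str:
    """Reactive agent: a fixed if-this-then-that rule with no memory or planning."""
    text = message.lower()
    if any(keyword in text for keyword in SPAM_KEYWORDS):
        return "spam"
    return "inbox"

print(reactive_spam_filter("Click HERE to claim free money!"))  # spam
print(reactive_spam_filter("Meeting moved to 3pm"))             # inbox
```

&lt;p&gt;A deliberative agent would additionally keep state between calls (e.g. a sender’s history) and plan over it; a hybrid agent layers that planning on top of fast rules like this one.&lt;/p&gt;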

&lt;h2&gt;
  
  
  Why Agents Matter for SMEs
&lt;/h2&gt;

&lt;p&gt;For small and medium-sized businesses (SMEs), agents can automate decisions that normally need a team:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detecting fraud in transactions.&lt;/li&gt;
&lt;li&gt;Monitoring customer behavior.&lt;/li&gt;
&lt;li&gt;Automating financial reporting.&lt;/li&gt;
&lt;li&gt;Optimizing supply chains.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example: Building a Fraud Detection AI Agent
&lt;/h2&gt;

&lt;p&gt;Here’s an example of a fraud detection agent. It uses an LLM on AWS Bedrock to classify whether a transaction is fraudulent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json

# Initialize AWS Bedrock client
client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"  # adjust region
)

# Fraud detection agent
class FraudDetectionAgent:
    def __init__(self, model_id="anthropic.claude-v2:1"):
        self.model_id = model_id

    def sense(self, transaction):
        """Receive transaction details (environment input)."""
        return transaction

    def think(self, transaction):
        """Use LLM reasoning on transaction data."""
        # Claude v2 on Bedrock expects the Human/Assistant prompt format
        prompt = (
            "\n\nHuman: You are a fraud detection agent.\n"
            f"Transaction details: {transaction}\n"
            "Decide if this transaction looks suspicious. "
            "Return 'fraud' or 'legit' with reasoning.\n\nAssistant:"
        )
        response = client.invoke_model(
            modelId=self.model_id,
            contentType="application/json",
            accept="application/json",
            body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 200})
        )
        output = json.loads(response["body"].read().decode("utf-8"))
        return output.get("completion", "Unknown")

    def act(self, decision):
        """Perform action based on model output."""
        if "fraud" in decision.lower():
            print("Alert: Fraudulent transaction detected!")
        else:
            print("Transaction approved.")

    def run(self, transaction):
        data = self.sense(transaction)
        decision = self.think(data)
        self.act(decision)

# Example transaction
transaction_data = {
    "transaction_id": "TXN12345",
    "amount": 5000,
    "currency": "USD",
    "location": "Nairobi",
    "merchant": "Electronics Store",
    "time": "2025-09-24T20:00:00"
}

agent = FraudDetectionAgent()
agent.run(transaction_data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s Happening Here?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sense&lt;/strong&gt;: Reads transaction input.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Think&lt;/strong&gt;: Uses AWS Bedrock LLM (Claude, Llama, or Titan) to assess fraud risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Act&lt;/strong&gt;: Prints alert or approval.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This can be extended to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store fraud alerts in a database.&lt;/li&gt;
&lt;li&gt;Notify via email/SMS.&lt;/li&gt;
&lt;li&gt;Auto-block high-risk transactions.&lt;/li&gt;
&lt;/ul&gt;
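
&lt;p&gt;As a sketch of the first extension, the agent’s act step could persist alerts to SQLite. The table name and columns here are assumptions for illustration, not part of the original agent:&lt;/p&gt;

```python
import sqlite3

def act_and_store(decision: str, transaction: dict,
                  db_path: str = "fraud_alerts.db") -> bool:
    """Extended 'act' step: persist fraud alerts so they survive restarts.
    The schema below is a hypothetical example."""
    is_fraud = "fraud" in decision.lower()
    if is_fraud:
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS alerts "
            "(transaction_id TEXT, amount REAL, decision TEXT)"
        )
        conn.execute(
            "INSERT INTO alerts VALUES (?, ?, ?)",
            (transaction["transaction_id"], transaction["amount"], decision),
        )
        conn.commit()
        conn.close()
    return is_fraud
```

&lt;p&gt;Returning a boolean lets a caller chain further actions, such as sending an SMS or blocking the transaction.&lt;/p&gt;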

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI agents aren’t just hype; they’re becoming the invisible workforce of SMEs. By combining autonomy, adaptability, and real-time decision making, they can save businesses millions while unlocking growth.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>aiagents</category>
      <category>anthropic</category>
    </item>
    <item>
      <title>Chat with OpenAI: SME Fast AI Assistant</title>
      <dc:creator>Isaac Kinyanjui Ngugi</dc:creator>
      <pubDate>Sun, 21 Sep 2025 10:41:06 +0000</pubDate>
      <link>https://dev.to/iamisaackn/chat-with-openai-sme-fast-ai-assistant-4ddb</link>
      <guid>https://dev.to/iamisaackn/chat-with-openai-sme-fast-ai-assistant-4ddb</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;OVERVIEW&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I will show you how to build a project that connects to OpenAI's API.&lt;br&gt;
You'll learn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;How to set up Python and Anaconda.&lt;/li&gt;
&lt;li&gt;How to safely use API keys.&lt;/li&gt;
&lt;li&gt;How to send your messages to an AI model.&lt;/li&gt;
&lt;li&gt;How to read OpenAI's responses.&lt;/li&gt;
&lt;li&gt;How to give the AI a "personality".&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Task 1: How to set up Python and Anaconda.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Anaconda is a free Python distribution bundled with data science and AI tools. &lt;a href="https://www.anaconda.com/download/success" rel="noopener noreferrer"&gt;Download it here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Task 2: How to safely use API keys.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenAI API Key Setup:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up at &lt;a href="https://platform.openai.com"&gt;platform.openai.com&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Copy your API key.&lt;/li&gt;
&lt;li&gt;Create a file named &lt;strong&gt;.env&lt;/strong&gt; in your project folder.&lt;/li&gt;
&lt;li&gt;Inside &lt;strong&gt;.env&lt;/strong&gt;, add this line:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;OPENAI_API_KEY=sk-your-real-api-key&lt;/code&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Load API Key and Configure Client&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install needed packages
# !pip install --upgrade openai python-dotenv

from openai import OpenAI
import os
from dotenv import load_dotenv

# Load API key from .env file
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")

# Configure OpenAI client
openai_client = OpenAI(api_key=openai_api_key)
print("OpenAI client ready")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Task 3: How to send your messages to an AI model&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You will send a message and get an AI reply.&lt;br&gt;
How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;model&lt;/strong&gt;: which AI brain to use (start with &lt;strong&gt;"gpt-4o-mini"&lt;/strong&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;messages&lt;/strong&gt;: the conversation history; each message has a role (&lt;strong&gt;user, assistant, or system&lt;/strong&gt;).&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  &lt;strong&gt;Code to Send a Message&lt;/strong&gt;
&lt;/h4&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my_message = "What is the tallest mountain in the world?"

response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": my_message}]
)

ai_reply = response.choices[0].message.content
print("AI says:\n", ai_reply)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Task 4: How to read OpenAI's responses.&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;OpenAI responses come with detailed metadata. Below is an example response object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ChatCompletion(
 id='chatcmpl-CGMnpfRxsx23fmmwFGIn3rR0uLAw7', 
 choices=[
  Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(
   content='The tallest mountain in the world is Mount Everest, which stands at an elevation of 8,848.86 meters (29,031.7 feet) above sea level. It is located in the Himalayas on the border between Nepal and the Tibet Autonomous Region of China.', 
   refusal=None, 
   role='assistant', 
   annotations=[], 
   audio=None, 
   function_call=None, 
   tool_calls=None)
        )
     ], 
 created=1758016937, 
 model='gpt-4o-mini-2024-07-18', 
 object='chat.completion', 
 service_tier='default', 
 system_fingerprint='fp_560af6e559', 
 usage=CompletionUsage(
   completion_tokens=55,
   prompt_tokens=16, 
   total_tokens=71,
   completion_tokens_details=CompletionTokensDetails(
     accepted_prediction_tokens=0, 
     audio_tokens=0, 
     reasoning_tokens=0, 
     rejected_prediction_tokens=0), 
 prompt_tokens_details=PromptTokensDetails(
  audio_tokens=0, 
  cached_tokens=0)
 )
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the extra metadata is meant for debugging. The parts that actually matter:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;choices[0].message.content:&lt;/strong&gt; the actual text answer from the AI that you show to the user. “The tallest mountain in the world is Mount Everest…”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;usage:&lt;/strong&gt; token breakdown. &lt;br&gt;
prompt_tokens: how many tokens your input used. &lt;br&gt;
completion_tokens: how many tokens the AI generated. &lt;br&gt;
total_tokens: prompt + completion (what you’re billed for).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;finish_reason (inside choices):&lt;/strong&gt; tells you why the AI stopped. &lt;br&gt;
"stop": finished normally. &lt;br&gt;
"length": got cut off (you might want to continue). &lt;br&gt;
"tool_calls" / "function_call": it wants to call a function/tool.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
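
&lt;p&gt;In practice you branch on finish_reason before showing the reply. A small sketch (the wording of the advice strings is mine, not from the API):&lt;/p&gt;

```python
def describe_finish_reason(reason: str) -> str:
    """Map finish_reason values to what your app should do next."""
    actions = {
        "stop": "reply is complete; show it to the user",
        "length": "reply was cut off; raise max_tokens or ask the model to continue",
        "tool_calls": "the model wants to call a tool; run it and send back the result",
    }
    return actions.get(reason, "unrecognized finish_reason: " + reason)

# With a real response object you would call:
# describe_finish_reason(response.choices[0].finish_reason)
print(describe_finish_reason("stop"))  # reply is complete; show it to the user
```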

&lt;p&gt;&lt;strong&gt;Tokens matter because more tokens = higher cost.&lt;/strong&gt;&lt;/p&gt;
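
&lt;p&gt;Since cost scales with tokens, a quick helper can turn usage counts into dollars. The per-million-token prices below are placeholders; check OpenAI’s pricing page for current rates:&lt;/p&gt;

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_1m: float = 0.15,
                  output_price_per_1m: float = 0.60) -> float:
    """Rough USD cost for one request. Default prices are placeholders,
    not official rates -- look them up before relying on this."""
    return (prompt_tokens * input_price_per_1m
            + completion_tokens * output_price_per_1m) / 1_000_000

# Using the usage numbers from the example response above (16 prompt, 55 completion)
print(estimate_cost(16, 55))
```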

&lt;h4&gt;
  
  
  &lt;strong&gt;Code: Check Token Usage&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def ask_llm(message: str, model="gpt-4o-mini"):
    response = openai_client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": message}]
    )

    ai_reply = response.choices[0].message.content
    usage = response.usage

    print("AI Reply:\n", ai_reply[:200], "...")  # print first part
    print("\n--- Token Info ---")
    print("Prompt Tokens:", usage.prompt_tokens)
    print("Completion Tokens:", usage.completion_tokens)
    print("Total Tokens:", usage.total_tokens)

ask_llm("Explain the difference between supervised and unsupervised learning in AI.")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Task 5: How to give the AI a "personality"&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You only need to add a system prompt to shape how the AI talks.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Code: AI as a Character&lt;/strong&gt;
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;character_personalities = {
    "Tony Stark": "You are Tony Stark. Be witty, sarcastic, and confident. End some replies with: 'Because I'm Tony Stark.'",
    "Sleepy Cat": "You are a very sleepy cat. Always sound drowsy. Mention naps often."
}

chosen_character = "Tony Stark"
system_instructions = character_personalities[chosen_character]

user_message = "Hey, how you doing?"

response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": user_message}
    ]
)

print(f"{chosen_character} says:\n")
print(response.choices[0].message.content)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;PRACTICE IDEAS&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Change my_message to ask your own question.&lt;/li&gt;
&lt;li&gt;Switch models ("gpt-4o" vs "gpt-4o-mini") and compare.&lt;/li&gt;
&lt;li&gt;Create your own character (e.g., “Soccer Commentator”).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;👉 For more AI content and job reposts, connect with me, follow me, and subscribe to my newsletter. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Newsletter:&lt;/strong&gt; &lt;a href="https://lnkd.in/dQGdzsmR" rel="noopener noreferrer"&gt;https://lnkd.in/dQGdzsmR&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn:&lt;/strong&gt; &lt;a href="https://lnkd.in/d2ArWuWW" rel="noopener noreferrer"&gt;https://lnkd.in/d2ArWuWW&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Twitter:&lt;/strong&gt; &lt;a href="https://x.com/itsisaackngugi" rel="noopener noreferrer"&gt;https://x.com/itsisaackngugi&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>artificialintelligence</category>
      <category>openai</category>
      <category>sme</category>
      <category>techforbusiness</category>
    </item>
  </channel>
</rss>
