<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kush Dhuvad</title>
    <description>The latest articles on DEV Community by Kush Dhuvad (@kush_dhuvad_c8d4f344c66c7).</description>
    <link>https://dev.to/kush_dhuvad_c8d4f344c66c7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3783142%2F9527debe-6506-4f43-9293-046dd9d946a3.png</url>
      <title>DEV Community: Kush Dhuvad</title>
      <link>https://dev.to/kush_dhuvad_c8d4f344c66c7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kush_dhuvad_c8d4f344c66c7"/>
    <language>en</language>
    <item>
      <title>How to make your own AI chatbot for absolute beginners?</title>
      <dc:creator>Kush Dhuvad</dc:creator>
      <pubDate>Tue, 24 Feb 2026 19:02:44 +0000</pubDate>
      <link>https://dev.to/kush_dhuvad_c8d4f344c66c7/how-to-make-your-own-ai-chatbot-for-absolute-beginners-n8p</link>
      <guid>https://dev.to/kush_dhuvad_c8d4f344c66c7/how-to-make-your-own-ai-chatbot-for-absolute-beginners-n8p</guid>
      <description>&lt;p&gt;Github Link - &lt;a href="https://github.com/Kush999/AI-chatbot.git" rel="noopener noreferrer"&gt;AI Chatbot&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever thought of integrating an AI chatbot into your internal systems, one that knows all of your documents, so you can get answers from your own data instead of generic responses? It all starts with building your own AI chatbot using LangChain. &lt;/p&gt;

&lt;p&gt;In this article, we'll build a chatbot that gives general answers based on what its model was trained on. In the next article, we'll learn how to use our own documents as a reference for the chatbot. &lt;/p&gt;

&lt;p&gt;In this article, I'll show you how to use LangChain and an OpenAI API key to run your own AI chatbot on Streamlit using Python. &lt;/p&gt;

&lt;p&gt;First things first, we need to set up the project. Create a new folder and, inside it, create two files: app.py and requirements.txt. Add the following libraries to requirements.txt, then install them with pip.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;langchain-openai&amp;gt;=0.1.0

langchain-core&amp;gt;=0.1.0

openai&amp;gt;=1.0.0

streamlit&amp;gt;=1.30.0

python-dotenv&amp;gt;=1.0.0

langchain-community&amp;gt;=0.1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
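&lt;p&gt;The install step above is a standard pip command (this assumes your terminal is open in the project folder):&lt;/p&gt;

```shell
pip install -r requirements.txt
```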



&lt;p&gt;Before we get started with making the chatbot, we need to understand what LangChain is and how we are going to use it to build our own chatbot. &lt;/p&gt;

&lt;h2&gt;
  
  
  LangChain
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp28vrktma80ke1w23jnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp28vrktma80ke1w23jnj.png" alt=" " width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Think of this as the "LEGO" set for AI. It helps us connect different pieces (like prompts and models) together into a "chain."&lt;/p&gt;

&lt;p&gt;The core concept of LangChain is the Chain. Imagine a literal chain with different links. Each link performs a specific task.&lt;/p&gt;

&lt;p&gt;In your code, the chain looks like this:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;chain = prompt | llm | output_parser&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Link 1 (Prompt):&lt;/strong&gt; Takes your raw text ("What is the moon?") and wraps it in instructions ("You are a helpful assistant...").&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Link 2 (LLM):&lt;/strong&gt; Takes that wrapped package and sends it to the AI brain (OpenAI) to get an answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Link 3 (Output Parser):&lt;/strong&gt; Takes the raw, messy data the AI sends back and turns it into clean text for your website.&lt;/p&gt;
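&lt;p&gt;To make the three links concrete, here is a pure-Python analogy (this is not LangChain itself; the function bodies are purely illustrative stand-ins): each link is just a function, and the chain pipes the output of one into the next.&lt;/p&gt;

```python
# Pure-Python analogy of a LangChain chain: each "link" is a function.

def prompt(question):
    # Link 1: wrap the raw text in instructions
    return f"You are a helpful assistant. Question: {question}"

def llm(wrapped):
    # Link 2: stand-in for the model call (returns a raw response object)
    return {"content": f"Answer to: {wrapped}"}

def output_parser(response):
    # Link 3: extract clean text from the raw response
    return response["content"]

def chain(question):
    # LangChain's "|" operator composes steps just like this nesting does
    return output_parser(llm(prompt(question)))

print(chain("What is the moon?"))
```

&lt;p&gt;LangChain's &lt;code&gt;|&lt;/code&gt; operator builds exactly this kind of pipeline, but with real prompt templates and model calls in place of these toy functions.&lt;/p&gt;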

&lt;p&gt;&lt;strong&gt;Why do we need LangChain?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without LangChain, you would have to write dozens of lines of manual code to:&lt;/p&gt;

&lt;p&gt;Connect to the API.&lt;br&gt;
Format the JSON data correctly.&lt;br&gt;
Check for errors if the AI fails.&lt;br&gt;
Remember what was said three sentences ago (Memory).&lt;/p&gt;

&lt;p&gt;LangChain reduces all that work into a single line. It allows you to swap out parts easily. If you decide you don't like the "Gemma" model and want to use "GPT-4," you only have to change one word in your code; the rest of the "chain" stays exactly the same.&lt;/p&gt;

&lt;p&gt;We will use LangSmith, LangChain's tracing service, to monitor the requests and responses going to the model. Go to the website given below, create a new project, and generate an API key. &lt;/p&gt;

&lt;p&gt;LangSmith - &lt;a href="https://smith.langchain.com/" rel="noopener noreferrer"&gt;LangSmith setup&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once your account is set up, you will see a trace like this in LangSmith whenever you ask your chatbot a question. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw12b2zxtak589ao7pzt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw12b2zxtak589ao7pzt5.png" alt=" " width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Let's understand the code now.
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Understanding required libraries&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_openai import ChatOpenAI #Importing the brain
from langchain_core.prompts import ChatPromptTemplate #Importing brain prompting
from langchain_core.output_parsers import StrOutputParser #Importing output parser
import streamlit as st # Importing Streamlit for UI 
import os 
from dotenv import load_dotenv #Importing keys 

load_dotenv()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Importing the essential keys&lt;/strong&gt;&lt;br&gt;
Create a separate file called .env and put your keys there. We never hard-code API keys in the main code, to avoid exposing secrets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;OPENAI_API_KEY=

LANGCHAIN_API_KEY=

LANGCHAIN_PROJECT="Test Project"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to generate an OpenAI API key?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to the website given below and sign up; you may have to add $5 of credit to your account before you can generate an API key. Once the credit is there, you can generate a new API key from the dashboard. You can also run Llama models locally for free, but that requires a high-spec machine and can be slow on lower-end systems. &lt;/p&gt;

&lt;p&gt;OpenAI API Key - &lt;a href="https://openai.com/api/" rel="noopener noreferrer"&gt;OpenAI API key&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we'll import the API keys in our code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")  #Importing OpenAI API key 

#Langsmith Tracing

os.environ["LANGCHAIN_TRACING_V2"] = "true" #Enabling tracing to monitor the API response.
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY") #Importing Langchain API key for tracing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Setting up the AI prompt&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here you can decide what kind of AI you want. You can ask it to act as a doctor or an engineer to tailor its responses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt=ChatPromptTemplate.from_messages([

    ("system","You are a helpful assistant that helps answer questions about the world."),

    ("human","Question:{question}")

])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Setting up Streamlit UI for the web&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;st.title("Chatbot with Langchain and Streamlit")
input_text=st.text_input("Ask a question about the world:")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5. Calling the OpenAI model and setting up the chain&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;llm=ChatOpenAI(model="gpt-3.5-turbo") #We are using the OpenAI model GPT-3.5 Turbo since it is cost-effective 

output_parser=StrOutputParser() #we are parsing the output to view on the web. 

chain=prompt | llm | output_parser #Setting up the chain so that the prompt goes to the LLM and then we get the output response
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;6. Invoking the response&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the user presses Enter, we invoke the chain so that the user question goes to the LLM and outputs a response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if input_text:

    st.write(chain.invoke({"question":input_text}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a working AI chatbot: ask it any question and it will respond using the GPT-3.5 Turbo model, as shown below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftow7no9bxmon0ge60lta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftow7no9bxmon0ge60lta.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the command below in your terminal to run the AI chatbot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamlit run app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try changing the System Prompt from 'helpful assistant' to 'grumpy pirate' and see how the personality shifts!&lt;/p&gt;

&lt;p&gt;I'm attaching the GitHub link to the final code below; feel free to refer to it if you have any questions. &lt;/p&gt;

&lt;p&gt;Github Link - &lt;a href="https://github.com/Kush999/AI-chatbot.git" rel="noopener noreferrer"&gt;AI Chatbot&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Buy Me a Coffee - &lt;a href="https://buymeacoffee.com/kushdhuvad" rel="noopener noreferrer"&gt;Buy Me a Coffee&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The $500 Million Security Gap: Bank of Ireland UK’s Critical Failure</title>
      <dc:creator>Kush Dhuvad</dc:creator>
      <pubDate>Fri, 20 Feb 2026 21:20:52 +0000</pubDate>
      <link>https://dev.to/kush_dhuvad_c8d4f344c66c7/the-500-million-security-gap-bank-of-ireland-uks-critical-failure-1kkj</link>
      <guid>https://dev.to/kush_dhuvad_c8d4f344c66c7/the-500-million-security-gap-bank-of-ireland-uks-critical-failure-1kkj</guid>
      <description>&lt;h3&gt;
  
  
  The Institutional Failure of "Confirmation of Payee"
&lt;/h3&gt;

&lt;p&gt;On February 20, 2026, Bank of Ireland UK (BOIUK) issued a formal apology for its failure to implement "Confirmation of Payee" (CoP) send requests. While seemingly a back-office technicality, this delay represents a critical vulnerability in the UK’s banking infrastructure. CoP is designed to cross-reference account names with account numbers to prevent Authorised Push Payment (APP) fraud—a category of crime that saw UK consumers lose &lt;strong&gt;£459.7 million&lt;/strong&gt; in the most recent annual reporting cycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  The $500 Million Security Gap
&lt;/h3&gt;

&lt;p&gt;The "So What?" of the BOIUK delay is simple: friction saves money. The UK Payment Systems Regulator (PSR) originally mandated CoP for Group 1 banks by 2020, yet mid-tier institutions continue to struggle with the rollout. For BOIUK customers, the lack of a CoP feature means they are significantly more exposed to "malicious redirection" scams. According to UK Finance data, &lt;strong&gt;77% of APP fraud cases&lt;/strong&gt; originate on social media, but the final point of failure is always the bank transfer. By missing the implementation window, BOIUK is effectively leaving the door unlocked in a neighborhood where 1 in 4 adults has been targeted by a financial scam.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Regulatory Split in Priorities
&lt;/h3&gt;

&lt;p&gt;The delay highlights a growing divide between institutional capability and regulatory demands. The PSR has the power to fine banks up to &lt;strong&gt;10% of their annual turnover&lt;/strong&gt; for systemic failures in payment security. BOIUK’s apology is a preemptive strike against potential litigation, but it does little to address the competitive disadvantage. While Tier 1 banks like Barclays and HSBC have maintained CoP functionality for over five years, BOIUK’s delay places it in a high-risk bracket for "mule" account activity, which cost the UK banking sector an estimated &lt;strong&gt;£1.2 billion&lt;/strong&gt; in total fraud losses last year.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Friction-Security Paradox
&lt;/h3&gt;

&lt;p&gt;The banking sector is currently caught in a paradox: customers demand instant transactions, but security requires intentional friction. The BOIUK failure suggests that the technical debt within mid-sized legacy systems is higher than anticipated. When a bank fails to verify a recipient's identity, the liability often shifts. Under new PSR rules, banks are generally required to reimburse victims of APP fraud up to a &lt;strong&gt;£415,000 cap&lt;/strong&gt; per claim, unless "gross negligence" is proven. By failing to provide CoP, BOIUK isn't just failing its customers; it is increasing its own balance sheet liability in a market where fraud costs are rising at a &lt;strong&gt;5% compound annual growth rate&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Bottom Line
&lt;/h3&gt;

&lt;p&gt;Bank of Ireland UK’s apology is the sound of a legacy institution hitting a technical wall. In a financial ecosystem where &lt;strong&gt;92% of UK adults&lt;/strong&gt; use mobile banking, the inability to verify a payee in real-time is no longer an "oversight"—it is a structural liability. As the PSR moves toward even stricter reimbursement mandates, the cost of being "sorry" will soon be outweighed by the cost of the fines.&lt;/p&gt;

</description>
      <category>fintech</category>
      <category>banking</category>
      <category>cybersecurity</category>
      <category>fraud</category>
    </item>
  </channel>
</rss>
