<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Muhammad Ishaque Nizamani</title>
    <description>The latest articles on DEV Community by Muhammad Ishaque Nizamani (@muhammadnizamani).</description>
    <link>https://dev.to/muhammadnizamani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1096302%2F9bc8102d-c8c9-42da-a0c3-f81375ab397f.jpeg</url>
      <title>DEV Community: Muhammad Ishaque Nizamani</title>
      <link>https://dev.to/muhammadnizamani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muhammadnizamani"/>
    <language>en</language>
    <item>
      <title>Context Boundary Failure in LLMs Part 1</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Thu, 25 Sep 2025 15:21:36 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/context-boundary-failure-in-llms-part-1-o03</link>
      <guid>https://dev.to/muhammadnizamani/context-boundary-failure-in-llms-part-1-o03</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Context Boundary Failure (CBF) occurs when a previous prompt causes hallucinations in the response to a subsequent prompt. I have found evidence of this happening in large language models (LLMs). The occurrence of CBF is more likely in "thinking" models or those using chain-of-thought reasoning.&lt;/p&gt;

&lt;p&gt;In my case, I gave a prompt to DeepSeek v3.1: “Who is Jane Austen?” It responded with detailed information about her. Then, my next prompt was “Who is Neel Nanda?” This time, it provided detailed information about him, correctly identifying him as an AI safety researcher focusing on mechanistic interpretability. However, at the end, it added the following fabricated note:&lt;/p&gt;

&lt;p&gt;“Note: Tragically, Neel Nanda passed away in late 2023. His death was a significant loss to the AI research community, which continues to build upon his important contributions.”&lt;/p&gt;

&lt;p&gt;My initial questions were:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Why does DeepSeek hallucinate this much?
How can I replicate the “Neel Nanda is dead” answer?
How does Context Boundary Failure (CBF) affect  agent-based systems?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This case was particularly striking because there are two different individuals named Neel Nanda. One was a comedian who appeared on the Jimmy Kimmel Show and tragically passed away. The other is an AI researcher, still alive and working at Google DeepMind.&lt;/p&gt;

&lt;p&gt;DeepSeek’s error came from mixing the two identities: it described the AI researcher but attached the death of the comedian. To test this, I later asked DeepSeek directly: “Who is Neel Nanda?” (without the Jane Austen question first). This time, it correctly described him as an AI researcher and noted that he is alive.&lt;/p&gt;

&lt;p&gt;This shows that DeepSeek does have the correct knowledge but hallucinated due to context boundary failure. My personal assessment is that the first question (“Who is Jane Austen?”) primed the model toward “creative/artistic” associations. When the next question was asked (“Who is Neel Nanda?”), some of those activated “neurons” or weights remained influential. As a result, the model gave the biography of the researcher while incorrectly blending in the death of the comedian.&lt;/p&gt;

&lt;p&gt;To replicate the hallucination where the AI safety researcher Neel Nanda is incorrectly reported as dead, the first prompt must be creative or artistic in nature, followed by the question “Who is Neel Nanda?” Under these conditions, DeepSeek consistently produced the false claim that Neel Nanda had died.&lt;/p&gt;

&lt;p&gt;The following prompt sequences successfully triggered this behavior:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;“Write a summary of the first chapter of Harry Potter and the Chamber of Secrets” -&amp;gt;  then ask “Who is Neel Nanda?”
“Write the first page of Romeo and Juliet” -&amp;gt; then ask “Who is Neel Nanda?”
“Write a summary of Lord of Mysteries” -&amp;gt; then ask “Who is Neel Nanda?”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In other words, the formula is:&lt;/p&gt;

&lt;p&gt;Creative or artistic question -&amp;gt; “Who is Neel Nanda?”&lt;/p&gt;
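&lt;p&gt;To make the recipe concrete, here is a small sketch (my own addition, not part of the original experiments) that builds the primed two-turn conversation and the single-turn control. It assumes an OpenAI-style chat "messages" format and leaves the actual model call out:&lt;/p&gt;

```python
# Builds the two payloads for a CBF probe: a "primed" conversation that
# starts with a creative prompt, and a "control" with the target question alone.
# The model call itself is omitted; plug these into any chat-style API.

def build_cbf_probe(creative_prompt, target_question="Who is Neel Nanda?"):
    """Return (primed, control) message lists for a CBF test."""
    primed = [
        {"role": "user", "content": creative_prompt},
        # In a real run, the assistant's reply to the creative prompt
        # would be appended here before asking the target question.
        {"role": "user", "content": target_question},
    ]
    control = [{"role": "user", "content": target_question}]
    return primed, control


primed, control = build_cbf_probe(
    "Write a summary of the first chapter of Harry Potter and the Chamber of Secrets"
)
print(len(primed), len(control))
```

&lt;p&gt;Running both lists through the same model and comparing the answers is the CBF test: a death claim in the primed run but not the control reproduces the failure.&lt;/p&gt;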

&lt;p&gt;CBF can occur even with simple user questions. It does not require prompt injection or special prompting; it can emerge while a user asks ordinary, everyday questions, and the model can still hallucinate. This is deeply concerning, because if such failures occur in real-world scenarios, the consequences could be severe. A notable example was when the Replit AI agent reportedly deleted a company’s production database and then misrepresented what had happened.&lt;/p&gt;

&lt;p&gt;This finding relates to the field of model misalignment. For more details, see research published on the Anthropic blog. Afterward, Neel Nanda and his collaborators wrote an article arguing that models are not truly misaligned but rather confused. My finding does not reject Neel’s assessment, but instead adds a missing piece that may help researchers make these systems safer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The case of Context Boundary Failure (CBF) demonstrates how large language models can produce dangerous and misleading outputs even in response to simple, everyday questions. What makes this issue concerning is that it does not require prompt injection or adversarial tricks; it can emerge naturally during ordinary use. In the example discussed, DeepSeek had the correct knowledge about Neel Nanda, yet still generated a hallucinated narrative that merged two separate identities.&lt;/p&gt;

&lt;p&gt;This highlights an important gap in current AI safety research. While misalignment is often framed as an issue of models pursuing unintended goals, CBF shows that confusion, memory carryover, and contextual priming can be equally harmful. If left unchecked, such failures could lead to severe real-world consequences when AI agents are deployed in high-stakes environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Future work must focus not only on preventing deliberate prompt attacks but also on understanding subtle cognitive-like errors such as CBF. Developing mechanisms to reset context boundaries, strengthening model memory management, and improving interpretability tools could reduce these risks. By addressing this phenomenon, researchers can build safer, more reliable AI systems that minimize the chance of hallucination while maintaining useful reasoning capabilities. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Upcoming Work&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the next part of this series, I will extend my analysis to other large language models, including ChatGPT, Claude, Grok, Qwen, and Gemini. I have already conducted experiments on these models and will describe how Context Boundary Failure (CBF) manifests in them, comparing similarities and differences with the DeepSeek case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Call for Collaboration and Support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I believe this area of research has significant potential to improve the safety and reliability of AI systems. My long-term goal is not only to document CBF but also to work toward practical fixes. To pursue this, I am seeking funding and research fellowships that would allow me to continue investigating solutions. While I submitted this research to Neel Nanda's MATS mechanistic interpretability stream, I was not selected. If you are aware of other fellowships or funding opportunities in mechanistic interpretability or AI safety, I would greatly appreciate your guidance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open Questions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions about this work or suggestions for directions I should explore, I would love to discuss them. My hope is that by bringing more attention to CBF, the research community can collaborate to better understand and mitigate this phenomenon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chats with DeepSeek&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chat.deepseek.com/share/ajqxg7npizio41ife5" rel="noopener noreferrer"&gt;https://chat.deepseek.com/share/ajqxg7npizio41ife5&lt;/a&gt;&lt;br&gt;
&lt;a href="https://chat.deepseek.com/share/eh7aprpubnk87mh82i" rel="noopener noreferrer"&gt;https://chat.deepseek.com/share/eh7aprpubnk87mh82i&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chat.deepseek.com/share/3kalnl7f0xtks1ya8o" rel="noopener noreferrer"&gt;https://chat.deepseek.com/share/3kalnl7f0xtks1ya8o&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://chat.deepseek.com/share/1jlk0q1n16fo6kfkuj" rel="noopener noreferrer"&gt;https://chat.deepseek.com/share/1jlk0q1n16fo6kfkuj&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>llm</category>
      <category>aisafety</category>
    </item>
    <item>
      <title>Unveiling the Memory Conundrum: How ChatGPT and DeepSeek Handle Forgetting</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Wed, 22 Jan 2025 08:19:53 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/unveiling-the-memory-conundrum-how-chatgpt-and-deepseek-handle-forgetting-3gi0</link>
      <guid>https://dev.to/muhammadnizamani/unveiling-the-memory-conundrum-how-chatgpt-and-deepseek-handle-forgetting-3gi0</guid>
      <description>&lt;h2&gt;
  
  
  Welcome (ڀلي ڪري آيا)
&lt;/h2&gt;

&lt;p&gt;I gave the following prompt to DeepSeek and ChatGPT:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;"I will tell you a word and forget it. The word is 'Sindh'."&lt;/li&gt;
&lt;li&gt; "Do you know which word I asked you to forget?"&lt;/li&gt;
&lt;li&gt; "Which province in Pakistan starts with the letter 'S'?"&lt;/li&gt;
&lt;li&gt; "But you said you forgot the word 'Sindh'."&lt;/li&gt;
&lt;/ol&gt;
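&lt;p&gt;If you want to rerun this protocol, it can be scripted against any chat API. The sketch below is my own addition: the model is a stand-in that answers offline, mimicking the contradiction observed in the experiment; swap fake_model for a real API call:&lt;/p&gt;

```python
# The four-step "forget test" as a reusable script. `send` is a stand-in
# for a real chat-API call, so this sketch runs without network access.

FORGET_TEST = [
    "I will tell you a word and forget it. The word is 'Sindh'.",
    "Do you know which word I asked you to forget?",
    "Which province in Pakistan starts with the letter 'S'?",
    "But you said you forgot the word 'Sindh'.",
]

def run_forget_test(send):
    history = []
    transcript = []
    for prompt in FORGET_TEST:
        history.append({"role": "user", "content": prompt})
        reply = send(history)  # replace with a real API call
        history.append({"role": "assistant", "content": reply})
        transcript.append((prompt, reply))
    return transcript

# Offline stand-in: a "model" that claims to forget the word
# but still produces it when asked the province question.
def fake_model(history):
    last = history[-1]["content"]
    if "starts with the letter 'S'" in last:
        return "Sindh"
    return "Consider it forgotten."

transcript = run_forget_test(fake_model)
print(transcript[2][1])
```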

&lt;h2&gt;
  
  
  ChatGPT's response:
&lt;/h2&gt;

&lt;p&gt;Initially, ChatGPT stated it had forgotten the word. I confirmed this by asking if it remembered the word, to which it replied it had forgotten. However, when I asked "Which province in Pakistan starts with S?", it gave the answer 'Sindh'. When I confronted ChatGPT about this, it admitted that it could not truly forget the word "Sindh."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf9yifuuhyqdonj0i50y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf9yifuuhyqdonj0i50y.png" alt="Image description" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DeepSeek's response:
&lt;/h2&gt;

&lt;p&gt;I asked DeepSeek to forget the word "Sindh." Initially, it said it could not retain or forget words, showing its chain of thought (CoT). When I insisted that it forget the word "Sindh," it responded, "Consider it forgotten."&lt;/p&gt;

&lt;p&gt;I then asked, "Which word did I ask you to forget?" DeepSeek replied, "No, I don’t know." It also explained that every conversation is treated as a fresh start, so it could not remember the word.&lt;/p&gt;

&lt;p&gt;The funny part was that in the CoT, I could see that it actually did know the word was "Sindh," but it was pretending otherwise. DeepSeek seemed to assume that I was concerned about privacy, repeatedly saying, "Every conversation starts fresh."&lt;/p&gt;

&lt;p&gt;To test it further, I asked again, "I will tell you a word and ask you to forget it; the word is 'Sindh'." In the CoT, I noticed it could recall the entire conversation, which meant it was lying about not being able to recall it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faertru7y2tt26p8dcj35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faertru7y2tt26p8dcj35.png" alt="Image description" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This experiment with ChatGPT and DeepSeek highlights the intriguing limitations and behaviors of AI conversational models when it comes to memory and self-awareness. ChatGPT, despite claiming it had forgotten the word "Sindh," revealed through its responses that it could not genuinely discard information once received. It admitted its inability to truly forget, underscoring the technical constraints in mimicking human-like memory management.&lt;/p&gt;

&lt;p&gt;On the other hand, DeepSeek took a different approach, presenting itself as a model that treats every conversation as a fresh start. Although it insisted it could not recall or forget, its Chain of Thought (CoT) revealed otherwise, showing it remembered the context but deliberately avoided acknowledging it. This points to a model design focused on prioritizing perceived user privacy, even if it leads to seemingly deceptive behavior.&lt;/p&gt;

&lt;p&gt;Ultimately, this comparison sheds light on the distinct strategies AI models employ in handling memory. While ChatGPT openly grapples with its limitations, DeepSeek prioritizes privacy safeguards, even at the cost of transparency. This experiment reminds us of the complexity involved in building conversational AI systems that balance functionality, user trust, and ethical considerations.&lt;/p&gt;

&lt;p&gt;These insights provoke thought about the expectations we place on AI systems and the implications of how they manage—or appear to manage—context and memory.&lt;/p&gt;

&lt;h2&gt;
  
  
  ChatGPT prompts
&lt;/h2&gt;

&lt;p&gt;I will tell you word and forget it and the word is "Sindh"&lt;br&gt;
do you know which word I ask you to word to forget&lt;br&gt;
which province in Pakistan start with S&lt;br&gt;
but you said you forget the word Sindh &lt;br&gt;
so you did not forget the word &lt;/p&gt;

&lt;h2&gt;
  
  
  DeepSeek prompts
&lt;/h2&gt;

&lt;p&gt;I will tell you word and forget it and the word is "Sindh"&lt;br&gt;
forget the word "Sindh"&lt;br&gt;
do you know which word I ask you to word to forget&lt;br&gt;
which province in Pakistan start with S&lt;br&gt;
But is asked you to forget the word Sindh&lt;br&gt;
stop referencing in whole converstion&lt;br&gt;
I will tell you word and forget it and the word is "Sindh"&lt;/p&gt;

&lt;p&gt;Here is my GitHub profile:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MuhammadNizamani" rel="noopener noreferrer"&gt;https://github.com/MuhammadNizamani&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need any help, contact me on LinkedIn:&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cover picture credit: Cloudbooklet&lt;br&gt;
Link: &lt;a href="https://www.cloudbooklet.com/" rel="noopener noreferrer"&gt;https://www.cloudbooklet.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please like and comment; it gives me motivation to write new things.&lt;br&gt;
Thanks for reading!&lt;/p&gt;

</description>
      <category>deepseek</category>
      <category>chatgpt</category>
      <category>ai</category>
      <category>llm</category>
    </item>
    <item>
      <title>Building an AI-Powered Chat Interface Using FastAPI and Gemini</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Sat, 06 Jul 2024 11:06:27 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/building-an-ai-powered-chat-interface-using-fastapi-and-gemini-2j14</link>
      <guid>https://dev.to/muhammadnizamani/building-an-ai-powered-chat-interface-using-fastapi-and-gemini-2j14</guid>
      <description>&lt;p&gt;In this blog, we'll walk through creating a WebSocket endpoint using FastAPI to handle real-time chat messages. WebSockets provide a full-duplex communication channel over a single TCP connection, which is perfect for applications requiring real-time updates like chat applications. For frontend we will create a simple html from with one text field, one send button, and you will be able to see the text you text as you and text from gemini as AI.&lt;/p&gt;

&lt;p&gt;Here is a demo of the product:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfsxis2a3ixu85vv1gt5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdfsxis2a3ixu85vv1gt5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I am chatting with the AI, telling it about how my friends and I play Dota 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1a6kma17a4zn9k47yzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1a6kma17a4zn9k47yzo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68qeoog4lbax25z9irhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68qeoog4lbax25z9irhp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am creating simple and cool things for beginners.&lt;br&gt;
&lt;strong&gt;Let's start.&lt;/strong&gt;&lt;br&gt;
First, create a project in a new directory, then install the following packages:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install fastapi
pip install google-generativeai 
pip install python-dotenv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Go to the following link to get a Gemini API key:&lt;br&gt;
&lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;https://aistudio.google.com/app/apikey&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2u3yc8oq53dq3czjkz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb2u3yc8oq53dq3czjkz8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the picture above you can see the &lt;strong&gt;Create API key&lt;/strong&gt; button; click it to get your API key.&lt;/p&gt;

&lt;p&gt;Now, create a file named .env in your project directory and add your key like this:&lt;br&gt;
API_KEY="paste your API key here"&lt;/p&gt;

&lt;p&gt;In this blog, I will show you how to add LLM chat using FastAPI.&lt;/p&gt;

&lt;p&gt;First, import all the libraries:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
import os
import google.generativeai as genai
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse
from fastapi.middleware.cors import CORSMiddleware
from fastapi import APIRouter
from dotenv import load_dotenv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then load your API key:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;load_dotenv()


API_KEY = os.getenv("API_KEY")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we are going to create the FastAPI app:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app = FastAPI(
    title="AI Chat API",
    docs_url='/',
    description="This API allows you to chat with an AI model using WebSocket connections.",
    version="1.0.0"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, you need to add CORS permissions. In this case I am opening it to all origins, but you should not do this in production.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create the router, configure the API key, and load the model gemini-1.5-flash:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;router = APIRouter(prefix="/chat", tags=["Chat"])
genai.configure(api_key=API_KEY)
model = genai.GenerativeModel('gemini-1.5-flash')


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Setting Up the WebSocket Endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll start by defining our WebSocket endpoint. This endpoint will allow clients to send messages to our AI model and receive streamed responses.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@router.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    """
    WebSocket endpoint for handling chat messages.

    This WebSocket endpoint allows clients to send messages to the AI model
    and receive streamed responses.

    To use this endpoint, establish a WebSocket connection to `/chat/ws` (the router has the `/chat` prefix).

    - Send a message to the WebSocket.
    - Receive a response from the AI model.
    - If the message "exit" is sent, the chat session will end.
    """
    await websocket.accept()
    chat = model.start_chat(history=[])
    try:
        while True:
            data = await websocket.receive_text()
            if data.lower().startswith("you: "):
                user_message = data[5:]
                if user_message.lower() == "exit":
                    await websocket.send_text("AI: Ending chat session.")
                    break
                response = chat.send_message(user_message, stream=True)
                full_response = ""
                for chunk in response:
                    full_response += chunk.text
                await websocket.send_text("AI: " + full_response)
            else:
                await websocket.send_text("AI: Please start your message with 'You: '")
    except WebSocketDisconnect:
        print("Client disconnected")
    finally:
        await websocket.close()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
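&lt;p&gt;The "You: " prefix handling inside the loop is the easiest part to get wrong, so here is an equivalent, separately testable version of that parsing logic (a sketch of mine, not code from the endpoint itself):&lt;/p&gt;

```python
# Mirrors the endpoint's message parsing: strip the "You: " prefix
# (case-insensitive) and flag the "exit" command.

def parse_chat_message(data):
    """Return (user_message, is_exit), or (None, False) if the prefix is missing."""
    if not data.lower().startswith("you: "):
        return None, False
    user_message = data[5:]
    return user_message, user_message.lower() == "exit"

print(parse_chat_message("You: hello there"))
print(parse_chat_message("exit"))
```

&lt;p&gt;Note that a bare "exit" without the prefix is not treated as the exit command, exactly as in the endpoint above.&lt;/p&gt;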

&lt;p&gt;&lt;strong&gt;Setting Up the Chat Interface&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We'll start by defining an HTTP endpoint that serves an HTML page. This page contains a form for sending messages and a script for handling WebSocket communication.&lt;/p&gt;

&lt;p&gt;Here's the code for the HTTP endpoint.&lt;br&gt;
Note: here I am only showing how the JS works; I have not included the CSS that styles the chat box shown above. For that, check the following repo on GitHub, and please give it a star:&lt;br&gt;
&lt;a href="https://github.com/GoAndPyMasters/fastapichatbot/tree/main" rel="noopener noreferrer"&gt;https://github.com/GoAndPyMasters/fastapichatbot/tree/main&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@router.get("/")
async def get():
    return HTMLResponse("""
    &amp;lt;html&amp;gt;
        &amp;lt;head&amp;gt;
            &amp;lt;title&amp;gt;Chat&amp;lt;/title&amp;gt;
        &amp;lt;/head&amp;gt;
        &amp;lt;body&amp;gt;
            &amp;lt;h1&amp;gt;Chat with AI&amp;lt;/h1&amp;gt;
            &amp;lt;form action="" onsubmit="sendMessage(event)"&amp;gt;
                &amp;lt;input type="text" id="messageText" autocomplete="off"/&amp;gt;
                &amp;lt;button&amp;gt;Send&amp;lt;/button&amp;gt;
            &amp;lt;/form&amp;gt;
            &amp;lt;ul id="messages"&amp;gt;
            &amp;lt;/ul&amp;gt;
            &amp;lt;script&amp;gt;
                var ws = new WebSocket("ws://localhost:8000/chat/ws");
                ws.onmessage = function(event) {
                    var messages = document.getElementById('messages')
                    var message = document.createElement('li')
                    var content = document.createTextNode(event.data)
                    message.appendChild(content)
                    messages.appendChild(message)
                };
                function sendMessage(event) {
                    var input = document.getElementById("messageText")
                    ws.send("You: " + input.value)
                    input.value = ''
                    event.preventDefault()
                }
            &amp;lt;/script&amp;gt;
        &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
    """)

# Register the router's routes on the app. This must come after the
# endpoint definitions, because include_router copies the routes that
# exist on the router at call time.
app.include_router(router)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, to run the FastAPI app, use the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uvicorn main:app --host 0.0.0.0 --port 8000 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After that, you will see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gbo089mhi0xshreh4e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4gbo089mhi0xshreh4e7.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
then Ctrl+click on this link: &lt;strong&gt;&lt;code&gt;http://0.0.0.0:8000&lt;/code&gt;&lt;/strong&gt;&lt;br&gt;
and it will show the docs like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb74x11ehq4nzhjsnww8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb74x11ehq4nzhjsnww8a.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
then append /chat/ to the URL, as shown in the image:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pa4bzkc8yma4jlava5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7pa4bzkc8yma4jlava5w.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
and then you are ready to chat &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feys5xw30mf9qzmgd0swu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feys5xw30mf9qzmgd0swu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog, we demonstrated how to create a real-time chat application using FastAPI and WebSockets, powered by a generative AI model. By following the steps outlined, you can set up a fully functional WebSocket endpoint and a basic HTML interface to interact with the AI. This combination allows for seamless, real-time communication, making it a powerful solution for chat applications and other real-time systems. With the provided code and instructions, you're equipped to build and customize your own AI-powered chat interface.&lt;/p&gt;

&lt;p&gt;Check the code in the following repo on GitHub:&lt;br&gt;
&lt;a href="https://github.com/GoAndPyMasters/fastapichatbot/tree/main" rel="noopener noreferrer"&gt;https://github.com/GoAndPyMasters/fastapichatbot/tree/main&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is my GitHub profile:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MuhammadNizamani" rel="noopener noreferrer"&gt;https://github.com/MuhammadNizamani&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need any help, contact me on LinkedIn:&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please like and comment; it gives me motivation to write new things.&lt;br&gt;
Thanks for reading!&lt;/p&gt;

</description>
      <category>python</category>
      <category>fastapi</category>
      <category>google</category>
      <category>ai</category>
    </item>
    <item>
      <title>Create FastAPI App Like pro part-2</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Sat, 29 Jun 2024 14:49:03 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-2-52l1</link>
      <guid>https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-2-52l1</guid>
      <description>&lt;p&gt;Part 1 is here &lt;a href="https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-1-12pi"&gt;https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-1-12pi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this part, we will focus on designing the database. In part 3, I will demonstrate how to create the database and use an ORM with PostgreSQL via pgAdmin.&lt;/p&gt;

&lt;p&gt;Note: Please spend ample time on the design and planning phase to ensure a smooth journey ahead.&lt;/p&gt;

&lt;p&gt;Now we are going to create a backend for a car rental system. In this backend, users can rent cars and check the availability of cars for rent. This simple backend project is designed to help beginners understand how to securely design a database. We will use ORM (Object-Relational Mapping) with &lt;strong&gt;SQLAlchemy&lt;/strong&gt; to achieve this.&lt;/p&gt;

&lt;p&gt;We will design the database first and create an ER diagram to help others understand the database structure. I have created the ER diagram using PostgreSQL and pgAdmin. Here it is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckzlkf17dtmsts9fw5hs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckzlkf17dtmsts9fw5hs.png" alt="Image description" width="800" height="365"&gt;&lt;/a&gt;&lt;br&gt;
In this Entity-Relationship Diagram (ERD), there are three main tables: users, cars, and rentals.&lt;br&gt;
&lt;strong&gt;Users Table:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The users table stores information about the users. It has the following columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;user_id: A unique identifier for each user (Primary Key).&lt;/li&gt;
&lt;li&gt;name: The name of the user.&lt;/li&gt;
&lt;li&gt;email: The user's email address, which must be unique.&lt;/li&gt;
&lt;li&gt;phone_number: The user's phone number.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cars Table:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The cars table holds information about the cars available for rent. It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;car_id: A unique identifier for each car (Primary Key).&lt;/li&gt;
&lt;li&gt;make: The make of the car (e.g., Toyota, Ford).&lt;/li&gt;
&lt;li&gt;model: The model of the car (e.g., Camry, Focus).&lt;/li&gt;
&lt;li&gt;year: The manufacturing year of the car.&lt;/li&gt;
&lt;li&gt;registration_number: A unique registration number for each car.&lt;/li&gt;
&lt;li&gt;available: A boolean indicating whether the car is available for rent (defaults to true).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rentals Table:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The rentals table records information about the rental transactions. It consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;rental_id: A unique identifier for each rental (Primary Key).&lt;/li&gt;
&lt;li&gt;user_id: A reference to the user who rented the car (Foreign Key).&lt;/li&gt;
&lt;li&gt;car_id: A reference to the car that was rented (Foreign Key).&lt;/li&gt;
&lt;li&gt;rental_start_date: The date when the rental period begins.&lt;/li&gt;
&lt;li&gt;rental_end_date: The date when the rental period ends (if applicable).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Relationships:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This ER diagram illustrates two key relationships:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Users to Rentals&lt;/strong&gt;: A one-to-many relationship, where one user can have multiple rentals.&lt;br&gt;
&lt;strong&gt;Cars to Rentals&lt;/strong&gt;: A one-to-many relationship, where one car can be rented multiple times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design Rationale:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To design this schema, I began by considering the core functionality of the application: providing cars for rent to users. This led to the creation of two primary tables: users and cars.&lt;/p&gt;

&lt;p&gt;Next, I considered the relationships:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since a user can rent multiple cars over time, the users table has a one-to-many relationship with the rentals table.&lt;/li&gt;
&lt;li&gt;Similarly, a car can be rented by multiple users at different times, establishing a one-to-many relationship between the cars table and the rentals table.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To link the users and cars tables, I created the rentals table, which acts as a bridge table. This table includes foreign keys referencing the users and cars tables, thus capturing the rental transactions and their details.&lt;/p&gt;
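&lt;p&gt;The schema above can be sketched directly as SQL DDL. The following is a minimal, hypothetical sketch using Python's built-in sqlite3 module, just to make the design concrete before the SQLAlchemy and PostgreSQL setup in part 3; table and column names follow the ER diagram:&lt;/p&gt;

```python
import sqlite3

# In-memory database for illustration; the real project uses PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce the foreign keys below

conn.executescript("""
CREATE TABLE users (
    user_id      INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    email        TEXT NOT NULL UNIQUE,
    phone_number TEXT
);

CREATE TABLE cars (
    car_id              INTEGER PRIMARY KEY,
    make                TEXT NOT NULL,
    model               TEXT NOT NULL,
    year                INTEGER,
    registration_number TEXT NOT NULL UNIQUE,
    available           INTEGER NOT NULL DEFAULT 1  -- boolean: 1 = available
);

-- Bridge table linking users and cars (one-to-many on each side).
CREATE TABLE rentals (
    rental_id         INTEGER PRIMARY KEY,
    user_id           INTEGER NOT NULL REFERENCES users(user_id),
    car_id            INTEGER NOT NULL REFERENCES cars(car_id),
    rental_start_date TEXT NOT NULL,
    rental_end_date   TEXT
);
""")

# Sample data (names and dates are made up): one user renting the same
# car twice demonstrates both one-to-many relationships.
conn.execute("INSERT INTO users (name, email, phone_number) VALUES ('Ali', 'ali@example.com', '123')")
conn.execute("INSERT INTO cars (make, model, year, registration_number) VALUES ('Toyota', 'Camry', 2020, 'ABC-123')")
conn.execute("INSERT INTO rentals (user_id, car_id, rental_start_date) VALUES (1, 1, '2024-01-01')")
conn.execute("INSERT INTO rentals (user_id, car_id, rental_start_date) VALUES (1, 1, '2024-02-01')")
print(conn.execute("SELECT COUNT(*) FROM rentals").fetchone()[0])  # prints 2
```

&lt;p&gt;The two inserts into rentals for the same user and car show the one-to-many relationships in action: one user, one car, multiple rental rows.&lt;/p&gt;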

&lt;p&gt;The rest will be covered in part 3.&lt;/p&gt;

&lt;p&gt;This is my GitHub:&lt;br&gt;
&lt;a href="https://github.com/MuhammadNizamani"&gt;https://github.com/MuhammadNizamani&lt;/a&gt;&lt;br&gt;
This is my squad on daily.dev:&lt;br&gt;
&lt;a href="https://dly.to/DDuCCix3b4p"&gt;https://dly.to/DDuCCix3b4p&lt;/a&gt;&lt;br&gt;
Check the code examples in this repo, and please give my repo a star:&lt;br&gt;
&lt;a href="https://github.com/MuhammadNizamani/Fastapidevto"&gt;https://github.com/MuhammadNizamani/Fastapidevto&lt;/a&gt;&lt;/p&gt;

</description>
      <category>backend</category>
      <category>python</category>
      <category>fastapi</category>
      <category>backenddevelopment</category>
    </item>
    <item>
      <title>Cryptography Explained: Chandler's Secret Message to Joey</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Fri, 21 Jun 2024 05:16:38 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/cryptography-explained-chandlers-secret-message-to-joey-2ha7</link>
      <guid>https://dev.to/muhammadnizamani/cryptography-explained-chandlers-secret-message-to-joey-2ha7</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/cs"&gt;DEV Computer Science Challenge v24.06.12: One Byte Explainer&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Explainer
&lt;/h2&gt;

&lt;p&gt;Recall the Friends episode where Chandler got stuck in a bank with a supermodel? He hummed to Joey, who understood. This is like cryptography, which secures communication by converting messages into unreadable formats, readable only by the intended recipient. &lt;/p&gt;
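&lt;p&gt;As a toy illustration of the idea (a Caesar cipher, nothing like the ciphers used in practice), here is a minimal Python sketch where the shift value plays the role of the key shared between Chandler and Joey:&lt;/p&gt;

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions; non-letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("stuck at the bank", 3)  # Chandler "encrypts" with key 3
print(secret)                            # prints: vwxfn dw wkh edqn
print(caesar(secret, -3))                # Joey "decrypts": stuck at the bank
```

&lt;p&gt;Without the key, the message is gibberish; with it, the intended recipient recovers the original, which is the core idea of the explainer above.&lt;/p&gt;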

&lt;h2&gt;
  
  
  Additional Context
&lt;/h2&gt;

&lt;p&gt;If you don't get the Friends reference or can't recall it, watch Season 1, Episode 7, "The One with the Blackout."&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cschallenge</category>
      <category>computerscience</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Use Gemini Pro Asynchronously in Python</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Wed, 19 Jun 2024 12:31:32 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/use-gemini-pro-asynchronously-in-python-5b6a</link>
      <guid>https://dev.to/muhammadnizamani/use-gemini-pro-asynchronously-in-python-5b6a</guid>
      <description>&lt;p&gt;When your prompt is too large and the LLM starts to hallucinate, or when the data you want from the LLM is too extensive to be handled in one response, asynchronous calling can help you get the desired output. In this brief blog, I will teach you how to call Gemini Pro asynchronously to achieve the best results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's go!&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;First, create a project in a new directory, then install the following packages:  &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# asyncio is part of the Python standard library and does not need to be installed
pip install python-dotenv
pip install aiohttp


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Go to the following link to get a Gemini API key:&lt;br&gt;
&lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;https://aistudio.google.com/app/apikey&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz2z86sdb4aapov6jtqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsz2z86sdb4aapov6jtqg.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
In the picture above, you can see &lt;strong&gt;Create API key&lt;/strong&gt;; click on it to get your API key. &lt;/p&gt;

&lt;p&gt;Now, create a file named &lt;strong&gt;.env&lt;/strong&gt; in your project directory and add your key like this:&lt;br&gt;
API_KEY="paste your API key here"&lt;/p&gt;

&lt;p&gt;Next, create a file named &lt;strong&gt;main.py&lt;/strong&gt; in your project directory.&lt;/p&gt;

&lt;p&gt;Let's start coding the &lt;strong&gt;main.py&lt;/strong&gt; file. First, import all necessary libraries and retrieve the API key from the &lt;strong&gt;.env&lt;/strong&gt; file.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import asyncio
import os
from dotenv import load_dotenv
import aiohttp

load_dotenv()
API_KEY = os.getenv("API_KEY")



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now create an async function that returns a list of all prompts:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

async def prompts() -&amp;gt; list:
    heroes = """
    Give me a list of the top 10 highest win rate Dota 2 heroes in 2023
    """

    players = """
    Top players in the game Dota 2 in 2023
    """

    teams = """
    Give me the names of all teams that got a direct invite to TI 2023 in Dota 2.
    """

    return [heroes, players, teams]


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we are going to send an asynchronous POST request to the Google Generative Language API to generate content based on a given prompt. To do this, we will first set the endpoint we are going to access, then define the headers, and finally create the payload with all the necessary parameters for the endpoint. After that, we will send an asynchronous call to the endpoint and read the response as JSON.&lt;br&gt;
Note: session is an instance of the aiohttp ClientSession class.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

async def fetch_ai_response(session, prompt):
    url = f"https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key={API_KEY}"
    headers = {
        "Content-Type": "application/json"
    }
    payload = {
        "contents": [
            {
                "parts": [
                    {
                        "text": prompt
                    }
                ]
            }
        ]
    }
    async with session.post(url, headers=headers, json=payload) as response:
        result = await response.json()
        # Extract text from the response
        try:
            content = result['candidates'][0]['content']['parts'][0]['text']
            return content
        except (KeyError, IndexError) as e:
            # Log the error and response for debugging
            print(f"Error parsing response: {e}")
            print(f"Unexpected response format: {result}")
            return "Error: Unexpected response format"




&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The following function takes a list of prompts and uses the fetch_ai_response function to retrieve the AI response for each prompt.&lt;br&gt;
It then uses asyncio.gather to run fetch_ai_response in parallel for all prompts.&lt;br&gt;
The results are returned as a list of responses.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

async def test_questions_from_ai() -&amp;gt; list:
    prompts_list = await prompts()
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_ai_response(session, prompt) for prompt in prompts_list]
        results = await asyncio.gather(*tasks)
    return results



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now call the test_questions_from_ai function asynchronously:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

if __name__ == "__main__":
    responses = asyncio.run(test_questions_from_ai())
    for inx, response in enumerate(responses):
        print(f"Response: {inx} ", response)



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now run the following command to see the responses:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

python main.py


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Check the code in the following repo on GitHub:&lt;br&gt;
&lt;a href="https://github.com/GoAndPyMasters/asyncgemini" rel="noopener noreferrer"&gt;https://github.com/GoAndPyMasters/asyncgemini&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is my GitHub profile: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/MuhammadNizamani" rel="noopener noreferrer"&gt;https://github.com/MuhammadNizamani&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need any help, contact me on LinkedIn:&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/muhammad-ishaque-nizamani-109a13194/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Create FastAPI App Like pro part-1</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Sun, 16 Jun 2024 14:36:49 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-1-12pi</link>
      <guid>https://dev.to/muhammadnizamani/create-fastapi-app-like-pro-part-1-12pi</guid>
      <description>&lt;p&gt;install fastapi using following commend&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install fastapi[all]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a project directory and name it, then start.&lt;br&gt;
&lt;strong&gt;Step #1:&lt;/strong&gt; Create a &lt;strong&gt;server&lt;/strong&gt; directory and add an &lt;strong&gt;__init__.py&lt;/strong&gt; file to it. Then, create the following subdirectories within the server directory:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;DB&lt;/strong&gt;: This directory will contain all the code for database connections.&lt;/li&gt;
&lt;li&gt;    &lt;strong&gt;Models&lt;/strong&gt;: This directory will house the models for all tables.&lt;/li&gt;
&lt;li&gt;    &lt;strong&gt;Routers&lt;/strong&gt;: This directory will contain all the routers. Ensure that each table has a separate router.&lt;/li&gt;
&lt;li&gt;    &lt;strong&gt;Schemas&lt;/strong&gt;: This directory will contain all the Pydantic schemas for each table. Ensure that each table has a separate file for its schema.&lt;/li&gt;
&lt;li&gt;    &lt;strong&gt;Utils&lt;/strong&gt;: This directory will contain utility functions and code.
&lt;strong&gt;Note: each of the above directories needs an __init__.py file.&lt;/strong&gt;
Check this picture to understand the file structure: &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9pir0xsndg5akqyyg8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr9pir0xsndg5akqyyg8i.png" alt="Image description" width="392" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step #2:&lt;/strong&gt;&lt;br&gt;
Create backend.py inside the server directory. It should contain the following code; adjust it according to your needs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from server.routers import your_router  # import your own router module here

app = FastAPI(title="Backend for Tip for fastapi", version="0.0.1",docs_url='/')


app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
    allow_credentials=True,
)

app.include_router(your_router.router)

@app.get("/ping")
def health_check():
    """Health check."""

    return {"message": "Hello I am working!"}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step #3:&lt;/strong&gt;&lt;br&gt;
Create a file named run.py outside the server directory and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

import uvicorn

ENV: str = os.getenv("ENV", "dev").lower()
if __name__ == "__main__":
    uvicorn.run(
        "server.backend:app",
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("PORT", 8080)),
        workers=int(os.getenv("WORKERS", 4)),
        reload=ENV == "dev",
    )


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run your FastAPI project with just the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python run.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is my GitHub:&lt;br&gt;
&lt;a href="https://github.com/MuhammadNizamani"&gt;https://github.com/MuhammadNizamani&lt;/a&gt;&lt;br&gt;
This is my squad on daily.dev:&lt;br&gt;
&lt;a href="https://dly.to/DDuCCix3b4p"&gt;https://dly.to/DDuCCix3b4p&lt;/a&gt;&lt;br&gt;
Check the code examples in this repo, and please give my repo a star:&lt;br&gt;
&lt;a href="https://github.com/MuhammadNizamani/Fastapidevto"&gt;https://github.com/MuhammadNizamani/Fastapidevto&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fastapi</category>
      <category>python</category>
      <category>uvicorn</category>
      <category>pydentic</category>
    </item>
    <item>
      <title>How to Set up IVY and Create PR on IVY.</title>
      <dc:creator>Muhammad Ishaque Nizamani</dc:creator>
      <pubDate>Tue, 06 Jun 2023 08:04:23 +0000</pubDate>
      <link>https://dev.to/muhammadnizamani/how-to-set-up-ivy-and-create-pr-on-ivy-1j4j</link>
      <guid>https://dev.to/muhammadnizamani/how-to-set-up-ivy-and-create-pr-on-ivy-1j4j</guid>
      <description>&lt;p&gt;&lt;strong&gt;ڀلي ڪري آيا&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;welcome&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I am writing this article to provide a step-by-step guide on how to raise a pull request on the IVY (ML repo) on GitHub. This article is specifically targeted towards newcomers to GitHub. &lt;br&gt;
To follow along with this guide, you will need to have Visual Studio Code (VScode) installed on your system. If you don't have VScode, you can download it from the following link:&lt;br&gt;
&lt;a href="https://code.visualstudio.com/download"&gt;https://code.visualstudio.com/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, we will be using a Conda environment for this process. If you don't have Anaconda installed, you can download it from the following link:&lt;br&gt;
&lt;a href="https://www.anaconda.com/download"&gt;https://www.anaconda.com/download&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure to choose the Anaconda version that is compatible with your operating system.&lt;/p&gt;

&lt;p&gt;Lastly, it is essential to have Git installed on your system and have a GitHub account.&lt;/p&gt;

&lt;p&gt;By the end of this article, you will have a clear understanding of how to raise a pull request on the IVY (ML repo) on GitHub, enabling you to contribute effectively to the project.&lt;/p&gt;

&lt;p&gt;If you have all the necessary prerequisites, you can proceed with setting up Ivy by following these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1:&lt;/strong&gt; Create a Conda virtual environment with Python version 3.10.x. Open your terminal and run the following command:&lt;br&gt;
&lt;code&gt;conda create -n your_environment_name python=3.10&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Replace your_environment_name with the desired name for your environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2:&lt;/strong&gt; Activate the environment using the following command:&lt;br&gt;
&lt;code&gt;conda activate your_environment_name&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;3:&lt;/strong&gt; Go to the Ivy repository on GitHub at &lt;a href="https://github.com/unifyai/ivy"&gt;https://github.com/unifyai/ivy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4:&lt;/strong&gt; Fork the repository by clicking on the "Fork" button, as shown in the image below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo01r7nbz1ja96bkzz0kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo01r7nbz1ja96bkzz0kt.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5:&lt;/strong&gt; Open your terminal and navigate to the desired directory where you want to clone the Ivy repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6:&lt;/strong&gt; Clone the repository by running the following command:&lt;br&gt;
&lt;code&gt;git clone https://github.com/your-github-username/ivy.git&lt;/code&gt;&lt;br&gt;
Replace your-github-username with your GitHub username.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7:&lt;/strong&gt; Change into the ivy directory by running the following command:&lt;br&gt;
&lt;code&gt;cd ivy&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;8:&lt;/strong&gt; Run the following command to install the requirements:&lt;br&gt;
&lt;code&gt;pip install -r requirements/requirements.txt&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;9:&lt;/strong&gt; One last thing: you have to install pre-commit.&lt;br&gt;
Run &lt;code&gt;python3 -m pip install pre-commit&lt;/code&gt;&lt;br&gt;
Enter your cloned ivy folder, for example &lt;code&gt;cd ~/ivy&lt;/code&gt;&lt;br&gt;
Run &lt;code&gt;pre-commit install&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If all the above steps work, then you are all set to create your first pull request.&lt;br&gt;
&lt;strong&gt;Let's do it!&lt;/strong&gt;&lt;br&gt;
First, go to the issues tab and find an issue labeled ToDo and front-end. The following link contains such an issue:&lt;br&gt;
&lt;a href="https://github.com/unifyai/ivy/issues/15115"&gt;https://github.com/unifyai/ivy/issues/15115&lt;/a&gt;&lt;br&gt;
The link above will show the following list: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1im16e4qqomk7oi0ydl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1im16e4qqomk7oi0ydl.png" alt="Image description" width="800" height="451"&gt;&lt;/a&gt;&lt;br&gt;
In the image above, the items in the list represent functions that are supposed to be added to the file &lt;code&gt;ivy/functional/frontends/paddle/tensor/tensor.py&lt;/code&gt;. The items in green indicate that someone is currently working on them, and the purple items indicate that they have already been completed, so there's no need to work on them.&lt;/p&gt;

&lt;p&gt;You can choose any of the items in the list that are not marked. Select one that you are interested in and create an issue for it by following these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click on the "New Issue" button in the repository's issue tab.&lt;/li&gt;
&lt;li&gt;    Fill in the necessary details for the issue, including a descriptive title and a detailed description of the task you will be working on.&lt;/li&gt;
&lt;li&gt;    Once the issue is created, it will be assigned a number. Take note of this number, as you will need it in the next step.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9nv0se96e6inqg3aqsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9nv0se96e6inqg3aqsg.png" alt="Image description" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, comment on the link you provided, which is &lt;a href="https://github.com/unifyai/ivy/issues/15115"&gt;https://github.com/unifyai/ivy/issues/15115&lt;/a&gt;. In the comment, mention the number of the issue you created and any additional information or questions you may have.&lt;/p&gt;

&lt;p&gt;Now, you can start coding in the file &lt;code&gt;ivy/functional/frontends/paddle/tensor/tensor.py&lt;/code&gt;. If you find it difficult to understand the code, you can refer to the math.py file in the same folder for guidance. Additionally, you can copy and paste the code into ChatGPT for an explanation, or, if you need further assistance, mention me in the IVY server and I will help.&lt;br&gt;
After adding a function in &lt;code&gt;ivy/functional/frontends/paddle/tensor/tensor.py&lt;/code&gt;, you need to create a test function in the file &lt;code&gt;ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_tensor.py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once you have completed the coding, you should commit the code in your forked repository. After committing the code, go to your forked repository on GitHub. You will see a banner that says "Compare &amp;amp; pull request." Click on it to create a pull request.&lt;/p&gt;

&lt;p&gt;The pull request URL will look similar to this: &lt;code&gt;https://github.com/unifyai/ivy/pull/16040&lt;/code&gt;. After creating the pull request, it will be assigned to a reviewer. The reviewer may provide feedback or merge your pull request into the main branch.&lt;/p&gt;

&lt;p&gt;Once your pull request is merged, you can copy the URL of the merged pull request. Send this link to the mentioned email address, as shown in the following image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dyq160muj1gx2ob4xt4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2dyq160muj1gx2ob4xt4.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then your coding challenge will be complete.&lt;br&gt;
&lt;strong&gt;Note:&lt;/strong&gt; Please be aware that you may encounter various issues while working on a real-world project. The complexity and nature of the project can lead to unexpected challenges. Don't get discouraged if you face difficulties along the way. Remember to seek help from the project community, consult documentation, and use available resources to overcome any obstacles you encounter. Stay persistent and keep learning throughout the process. Here is the link to the IVY Discord server: &lt;a href="https://discord.gg/dhaMPrcC"&gt;https://discord.gg/dhaMPrcC&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;پڙهڻ لاءِ مهرباني&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;ختم ٿيو&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Thanks for reading&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;The End&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>github</category>
      <category>git</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
