<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bimochan raj kunwar</title>
    <description>The latest articles on DEV Community by Bimochan raj kunwar (@rbimochan).</description>
    <link>https://dev.to/rbimochan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3684483%2Fbfa6596b-a565-4818-912b-92e4a347f084.png</url>
      <title>DEV Community: Bimochan raj kunwar</title>
      <link>https://dev.to/rbimochan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rbimochan"/>
    <language>en</language>
    <item>
      <title>Understanding the AI/ML Flow From Data to Deployment</title>
      <dc:creator>Bimochan raj kunwar</dc:creator>
      <pubDate>Tue, 30 Dec 2025 15:16:57 +0000</pubDate>
      <link>https://dev.to/rbimochan/full-stack-ai-development-11dg</link>
      <guid>https://dev.to/rbimochan/full-stack-ai-development-11dg</guid>
      <description>&lt;p&gt;The MLOps Philosophy: A Structured Approach&lt;br&gt;
In professional circles, the entire lifecycle of an ML project is often referred to as MLOps (Machine Learning Operations). It's a discipline that streamlines the process from experimentation to production, ensuring reliability and scalability. We can break this down into three distinct, interconnected "zones."&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Data &amp;amp; Training Zone: The "Laboratory"
This is where the intelligence is forged. Before a model can be "smart," it needs to learn from vast amounts of information. A database plays a crucial role here, but it's more than just a storage unit; the data it holds is the raw material.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Data Collection &amp;amp; Ingestion: The very first step involves gathering raw data. This could be from traditional SQL or NoSQL databases, cloud storage buckets, real-time streams, or third-party APIs. Think of it as sourcing your ingredients.&lt;/p&gt;
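&lt;p&gt;As a minimal, runnable sketch of the ingestion step (using Python's built-in sqlite3 as a stand-in for a production data source; the table and rows are invented for illustration):&lt;/p&gt;

```python
import sqlite3

# Build a tiny in-memory database standing in for a production data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT, hour INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "click", 9), (2, "purchase", 14), (1, "click", 21)],
)

# Ingestion: pull the raw rows out of the source system for the pipeline.
raw_rows = conn.execute("SELECT user_id, action, hour FROM events").fetchall()
print(raw_rows)  # [(1, 'click', 9), (2, 'purchase', 14), (1, 'click', 21)]
```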

&lt;p&gt;Data Preprocessing &amp;amp; Feature Engineering: Raw data is often messy. This crucial phase involves:&lt;/p&gt;

&lt;p&gt;Cleaning: Handling missing values, correcting errors, and removing duplicates.&lt;/p&gt;

&lt;p&gt;Transformation: Converting data into a format suitable for algorithms (e.g., scaling numbers, encoding categorical variables).&lt;/p&gt;

&lt;p&gt;Feature Engineering: This is an art form! It involves creating new, more informative features from existing ones to help the model learn better. An example might be combining day_of_week and hour_of_day to create a time_of_day_category.&lt;/p&gt;
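&lt;p&gt;The time_of_day_category example above can be sketched as a small Python function; the bucket boundaries here are my own assumptions, not a standard:&lt;/p&gt;

```python
# Derive a coarser time_of_day_category feature from raw day_of_week
# and hour_of_day values (bucket boundaries are illustrative).
def time_of_day_category(day_of_week, hour_of_day):
    if hour_of_day >= 18:
        period = "evening"
    elif hour_of_day >= 12:
        period = "afternoon"
    elif hour_of_day >= 6:
        period = "morning"
    else:
        period = "night"
    day_type = "weekend" if day_of_week in ("Sat", "Sun") else "weekday"
    return day_type + "_" + period

print(time_of_day_category("Sat", 21))  # weekend_evening
print(time_of_day_category("Tue", 9))   # weekday_morning
```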

&lt;p&gt;Model Training: With clean, well-engineered data, an algorithm (like a neural network, decision tree, or regression model) "studies" these patterns. It adjusts its internal parameters to minimize errors between its predictions and the actual outcomes in the training data.&lt;/p&gt;

&lt;p&gt;The Model Artifact: The output of a successful training run isn't a live entity; it's a static file. This file, often called a "model artifact" (e.g., a .pkl for scikit-learn, .h5 for Keras, or .onnx for optimized runtime), encapsulates all the learned intelligence. This is the "brain" ready for deployment.&lt;/p&gt;
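&lt;p&gt;A toy end-to-end illustration of "train, save the artifact, load it back": here the "model" is a hand-fitted least-squares line serialized with Python's built-in pickle, standing in for a real .pkl from scikit-learn:&lt;/p&gt;

```python
import pickle
from statistics import mean

# "Training": fit y = a*x + b to toy data by ordinary least squares.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]
x_bar, y_bar = mean(xs), mean(ys)
a = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / sum(
    (x - x_bar) ** 2 for x in xs
)
b = y_bar - a * x_bar
model = {"slope": a, "intercept": b}

# The "model artifact": learned parameters serialized to a static file.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Later, a serving process loads the artifact and runs inference on new data.
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)
prediction = loaded["slope"] * 5.0 + loaded["intercept"]
print(round(prediction, 2))  # 9.95
```

&lt;p&gt;The key point survives the toy scale: training happens once, and the file on disk is all the serving layer ever needs.&lt;/p&gt;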

&lt;p&gt;&lt;em&gt;A visual representation of data flowing through a cleaning and training pipeline, culminating in a model artifact file.&lt;/em&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;The Serving Zone: The "API Bridge"
You've built a brilliant "brain" (your model artifact). Now, how do other applications communicate with it? This is where the API (Application Programming Interface) comes in. The model isn't created in the API; rather, the API acts as a universal translator and gateway to your model.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Model Hosting &amp;amp; Loading: The first step is to load your model artifact into a dedicated server or a cloud-based ML serving platform. Frameworks like FastAPI, Flask, or more specialized tools like TensorFlow Serving or TorchServe are commonly used here.&lt;/p&gt;

&lt;p&gt;API Endpoint Creation: The serving layer exposes a specific URL (an "endpoint"), for example, &lt;a href="https://api.yourapp.com/predict" rel="noopener noreferrer"&gt;https://api.yourapp.com/predict&lt;/a&gt; or &lt;a href="https://your-ml-service.cloud.com/sentiment" rel="noopener noreferrer"&gt;https://your-ml-service.cloud.com/sentiment&lt;/a&gt;. This is the address other applications will use to send data and receive predictions.&lt;/p&gt;

&lt;p&gt;Inference: When an external application sends new data to this API endpoint, the serving layer performs the magic:&lt;/p&gt;

&lt;p&gt;It receives the input data (e.g., text, image, numbers).&lt;/p&gt;

&lt;p&gt;It preprocesses this incoming data in the same way the training data was processed (crucial for consistency!).&lt;/p&gt;

&lt;p&gt;It passes the processed data to the loaded model artifact.&lt;/p&gt;

&lt;p&gt;The model makes a prediction (this is called "inference").&lt;/p&gt;

&lt;p&gt;The serving layer formats the prediction (often as JSON) and sends it back as the API response.&lt;/p&gt;
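&lt;p&gt;The steps above can be sketched framework-free; in a real deployment this handler would sit behind a FastAPI or Flask route, and the model parameters and preprocessing statistics below are invented for illustration:&lt;/p&gt;

```python
import json

# Toy "loaded model artifact": parameters learned at training time (assumed).
MODEL = {"slope": 2.0, "intercept": 1.0}
TRAINING_MEAN, TRAINING_STD = 10.0, 4.0  # statistics saved from training

def preprocess(raw_value):
    # Must mirror the training-time transformation exactly (standardization).
    return (raw_value - TRAINING_MEAN) / TRAINING_STD

def handle_request(body):
    # 1. Receive the input data (JSON, as an API endpoint would).
    payload = json.loads(body)
    # 2. Preprocess it the same way the training data was processed.
    x = preprocess(payload["value"])
    # 3. Pass it to the model and run inference.
    y = MODEL["slope"] * x + MODEL["intercept"]
    # 4. Format the prediction as JSON and send it back as the response.
    return json.dumps({"prediction": y})

print(handle_request('{"value": 14.0}'))  # {"prediction": 3.0}
```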

&lt;p&gt;&lt;em&gt;A diagram illustrating an API gateway receiving a request, forwarding it to a loaded model, and sending back a JSON response.&lt;/em&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;The Application Zone: The "User Interface"
This is where the magic becomes tangible for the end-user. This zone leverages the served ML model within a broader application context.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Backend (The Application Logic): The backend of your application is responsible for orchestrating much more than just the ML model. It handles:&lt;/p&gt;

&lt;p&gt;CRUD Operations: Managing user accounts, storing user-specific data, and logging model predictions for auditing or feedback.&lt;/p&gt;

&lt;p&gt;Business Logic: Implementing rules specific to your application (e.g., "if the prediction is X, then do Y").&lt;/p&gt;

&lt;p&gt;Security: Authentication, authorization, and data encryption.&lt;/p&gt;

&lt;p&gt;Integration: Communicating with the ML API, other internal services, and external APIs.&lt;/p&gt;
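&lt;p&gt;For instance, the backend's call to the ML API might be assembled like this (stdlib urllib only; the endpoint URL is the hypothetical one from earlier, and the payload shape is an assumption):&lt;/p&gt;

```python
import json
import urllib.request

# Hypothetical endpoint; a real backend would read this from configuration.
ML_API_URL = "https://api.yourapp.com/predict"

def build_prediction_request(features):
    # The backend packages user data as JSON and POSTs it to the ML API.
    body = json.dumps(features).encode("utf-8")
    return urllib.request.Request(
        ML_API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_prediction_request({"text": "great product"})
# urllib.request.urlopen(req) would send it and return the JSON prediction.
```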

&lt;blockquote&gt;
&lt;p&gt;The backend basically decides how the model's output is used. It acts as the brain for the entire application, deciding when and how to call the ML API.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Frontend (The User Experience): This is what your users interact with—the mobile app, the web interface, or a desktop application.&lt;/p&gt;

&lt;p&gt;It collects user input (e.g., text for sentiment analysis, an image for object detection).&lt;/p&gt;

&lt;p&gt;It sends this input to your backend (which, in turn, might call the ML API).&lt;/p&gt;

&lt;p&gt;It receives the model's prediction (via the backend) and displays it in a user-friendly manner.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;An illustration showing a user interacting with a mobile app (frontend), which communicates with a backend, and that backend then calls the ML API to get a prediction.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Rewritten Summary for Your Notes&lt;br&gt;
For a concise, professional summary to use in discussions or on your resume, consider this:&lt;/p&gt;

&lt;p&gt;"The AI/ML flow starts with Data Engineering to prepare raw data. This leads into Model Development, where an algorithm is trained to produce a 'Model Artifact.' This artifact is then Deployed within an API (the Serving Layer) to enable real-time predictions. Finally, the Application Layer (comprising a Backend for business logic and a Frontend for user interaction) consumes this API to deliver an intelligent experience to the end-user, while managing data persistence and security."&lt;/p&gt;

&lt;p&gt;Key Takeaway: Separation of Concerns&lt;br&gt;
The most critical refinement to your initial understanding is the separation between the model's training phase and its deployment/serving phase. You train a model once (or periodically), save it, and then load that saved model into an API to serve countless requests. This distinction is vital for scalability, maintainability, and efficient resource utilization.&lt;/p&gt;

&lt;p&gt;Understanding this flow is your superpower. While specific programming languages, cloud providers, and machine learning frameworks will change, the fundamental stages of collecting data, training models, serving predictions, and integrating into applications will remain constant.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>devops</category>
    </item>
    <item>
      <title>Google Scholar for University Scholar: Day 1</title>
      <dc:creator>Bimochan raj kunwar</dc:creator>
      <pubDate>Mon, 29 Dec 2025 14:39:10 +0000</pubDate>
      <link>https://dev.to/rbimochan/google-scholar-for-university-scholar-day-1-c82</link>
      <guid>https://dev.to/rbimochan/google-scholar-for-university-scholar-day-1-c82</guid>
      <description>&lt;p&gt;I’m trying to build a similar Google Scholar for my university. Do you have any ideas?&lt;br&gt;
This is my college project. We are assigned to build a pretty basic search engine using a crawler like Selenium. Then, I thought to myself, “Why stop there? Making a generic project won’t make me shine. So, I started researching.”&lt;/p&gt;

&lt;p&gt;What is a search engine, and why is it important? One thing led to another, and I found out the search engine has a new form now; that’s basically what Perplexity is. Then I heard the CEO of Perplexity on the Lex Fridman podcast. He talked about how Google’s search engine isn’t its only source of revenue; YouTube alone brings in tens of billions of dollars annually. But back to search engines; I went off-topic. My aim is also to build a billion-dollar company someday. LOL.  &lt;/p&gt;

&lt;p&gt;So, back to search engines. Previously, naive keyword-matching theory and various ranking algorithms were used; vector search came later. Google builds on ranking functions like BM25, which I still need to read up on.&lt;/p&gt;
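&lt;p&gt;For reference, BM25 (the Okapi ranking function) scores a document for a query roughly like this; a from-scratch sketch with the usual k1 and b hyperparameters:&lt;/p&gt;

```python
import math

# Minimal Okapi BM25: score each document for a query by summing, per query
# term, a smoothed IDF weighted by a length-normalized term frequency.
def bm25_scores(query, docs, k1=1.5, b=0.75):
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n = len(tokenized)
    scores = []
    for doc in tokenized:
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in tokenized if term in d)
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)  # smoothed IDF
            tf = doc.count(term)
            norm = tf + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = ["vector search for scholars", "search engine ranking basics", "campus news"]
scores = bm25_scores("search ranking", docs)
best = max(range(len(docs)), key=lambda i: scores[i])
# best == 1: "search engine ranking basics" matches both query terms.
```

&lt;p&gt;Elasticsearch, which the project plans to use for indexing, applies a BM25-based similarity by default, so this is also a peek at what it does under the hood.&lt;/p&gt;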

&lt;p&gt;I had an idea: how about I build a combined search and answer engine and name it Sonic? Sonic’s crawlers would run all the time and rate each webpage, an added layer for better ranking. The crawlers would have to be pretty unbiased, though.&lt;/p&gt;

&lt;p&gt;So I started building this Sonic search engine, with “AI” and all the help I could get. Then I realized that if I kept going, my tendency to overload my ideas would kick in: I pile on so much that I can’t carry the project, then drop it and forget about it. The same thing was about to happen here, so I decided to build an assignment-worthy project for now and add features later. That way, the project gets to exist first and can get better afterwards.&lt;/p&gt;

&lt;p&gt;I have this highly distracted yet incredibly curious mind, but for now it’s just me on this project, learning the basics of the algorithms, mathematics, and programming necessary to complete it.&lt;/p&gt;

&lt;p&gt;Project Name: University Scholar&lt;/p&gt;

&lt;p&gt;The project will be available on GitHub after I finish it. I’ll post it here daily.&lt;/p&gt;

&lt;p&gt;Tech Stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: Next.js&lt;/li&gt;
&lt;li&gt;Backend: FastAPI (most of the AI stuff happens here)&lt;/li&gt;
&lt;li&gt;Data Indexing: Elasticsearch&lt;/li&gt;
&lt;li&gt;AI/NLP: I want to add an answering feature as well.&lt;/li&gt;
&lt;li&gt;Crawler: Selenium&lt;/li&gt;
&lt;/ul&gt;
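&lt;p&gt;Independent of the fetcher (Selenium here), the heart of any crawler is a frontier queue plus a visited set so each page is crawled at most once; a minimal sketch over an invented link graph:&lt;/p&gt;

```python
from collections import deque

# Breadth-first crawl loop, decoupled from the fetcher: fetch_links would
# wrap Selenium and return the links found on a rendered page.
def crawl(seed, fetch_links, limit=100):
    frontier = deque([seed])
    visited = []
    while frontier:
        if len(visited) >= limit:
            break
        url = frontier.popleft()
        if url in visited:
            continue
        visited.append(url)
        for link in fetch_links(url):
            if link not in visited:
                frontier.append(link)
    return visited

# Toy link graph standing in for real pages (hypothetical URLs).
graph = {"/home": ["/papers", "/about"], "/papers": ["/papers/1", "/home"],
         "/about": [], "/papers/1": []}
order = crawl("/home", lambda u: graph.get(u, []))
print(order)  # ['/home', '/papers', '/about', '/papers/1']
```

&lt;p&gt;The limit parameter is the politeness valve: a real crawler would also respect robots.txt and rate-limit its requests.&lt;/p&gt;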

&lt;p&gt;I’ve completed the frontend using Next.js. I’ll be updating it daily. I’m having issues with Docker and FastAPI. &lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>ai</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
