Artificial Intelligence (AI) agents and Large Language Models (LLMs) are transforming how we interact with technology. From powering intelligent chatbots to automating complex workflows, these tools are becoming essential for developers and businesses alike. One of the easiest ways to dive into this space is by using Flowise, a low-code/no-code platform that simplifies building LLM-based applications using LangChain. In this blog, I’ll walk you through what AI agents and LLMs are, how to install Flowise locally, and how to create a simple chatbot project.
What Are AI Agents and LLMs?
AI agents are smart systems that can work on their own, make decisions, and interact with people or other software. Many of them use powerful tools called Large Language Models (LLMs) like OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, Meta’s LLaMA, or Mistral to understand and generate human-like text.
These LLMs are trained on massive amounts of data from the internet, books, and other sources, which allows them to handle a wide range of tasks. They can answer questions, translate languages, write emails or blog posts, generate code, and even act as helpful chat assistants, kind of like the one you're reading right now.
LangChain is like a toolbox that helps developers build smart apps using AI (like ChatGPT or Claude). It lets you connect different steps together, pull in data from other places, and even make the AI remember things. But it can be a bit hard to learn, especially if you're just starting out.
That’s where Flowise comes in. Flowise is a simple, drag-and-drop tool that lets you build AI apps without writing a lot of code. It does all the hard stuff behind the scenes, so you can focus on your ideas even if you’re not a tech expert.
Why Flowise?
Flowise is an open-source platform that lets you create LLM workflows and AI agents without writing extensive code. Its intuitive interface and pre-built templates make it ideal for prototyping chatbots, retrieval-augmented generation (RAG) systems, and more. Whether you’re a developer or a non-technical user, Flowise empowers you to experiment with AI quickly.
Step-by-Step Guide: Installing Flowise Locally and Building a Chatbot
Let’s set up Flowise locally and create a simple chatbot that responds to user queries using an LLM. For this project, we’ll use OpenAI’s API (you’ll need an API key) and build a conversational chatbot with Flowise.
Prerequisites
Before getting started, make sure you have the following:
- Node.js (v20 recommended)
Flowise doesn't work with Node.js versions above v20, so it's best to use exactly v20. If you already have a newer version installed, don't worry; you can manage multiple versions easily with NVM (Node Version Manager). Here's how to install and use it:
For macOS/Linux:
nvm install 20
nvm use 20
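If nvm itself isn't installed yet, the official install script sets it up first; the version number in the URL below is just an example, so check the nvm repository for the latest release, and restart your terminal afterwards so the nvm command is available:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash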
For Windows:
Use nvm-windows and follow the installation instructions.
Then, in Command Prompt or PowerShell, run:
nvm install 20
nvm use 20
- npm
This comes bundled with Node.js, so once Node is installed via NVM, you're good to go.
- OpenAI API Key
Sign up at platform.openai.com to get your API key.
⚠️ Important: You'll need to add a payment method and buy credits to make API requests; it's a paid service.
- A Code Editor & Terminal
I recommend using VS Code and your system's terminal or command line to run commands.
Step 1: Install Flowise Locally
Follow these steps to set up Flowise on your machine:
- Verify Node.js Installation
Open your terminal and check the Node.js and npm versions:
node -v
npm -v
If not installed, download and install Node.js from the official website.
- Install Flowise
In your terminal, run the following command to install Flowise globally:
npm install -g flowise
This installs the Flowise CLI, which allows you to start the Flowise server.
- Start Flowise
Run the following command to launch Flowise:
npx flowise start
After a moment, you’ll see a message indicating the server is running at http://localhost:3000. Open this URL in your web browser to access the Flowise interface.
The Flowise dashboard after launching at http://localhost:3000. Arrow points to the “Add New” button to start a new project.
Step 2: Create a Simple Chatbot in Flowise
Now that Flowise is running, let’s build a basic chatbot using a Conversation Chain.
- Access the Flowise UI
Navigate to http://localhost:3000 in your browser to see the Flowise dashboard (shown above).
- Create a New Project
- Click Add New to start a new project. This opens an empty canvas where you can build your chatbot.
- In the top-left corner, click the + icon to view available nodes (components from LangChain).
- Add Nodes for the Chatbot
- From the Chains group, drag a Conversation Chain node onto the canvas. This node handles back-and-forth interactions with the LLM.
- From the Chat Models section, drag the ChatOpenAI node onto your canvas. This node acts as the bridge between your Flowise project and OpenAI’s API. Next, connect the output of the ChatOpenAI node to the Conversation Chain node. This setup allows your AI to process messages, keep track of the conversation, and respond intelligently.
- From the Memory section, drag the Buffer Memory node onto the canvas. This node stores the conversation history, allowing the AI to remember previous messages and respond with better context.
The Flowise canvas with Conversation Chain, ChatOpenAI, and Buffer Memory nodes. Arrows point to the nodes and their connections.
- Configure the Nodes
This is where you set up each node to work together, connect your OpenAI API key, and fine-tune your chatbot’s behavior.
- ChatOpenAI Node:
- Click the node, select Connect Credential > Create New, and enter your OpenAI API key.
- Set the Model Name to gpt-4o-mini (or another model of your choice).
- Buffer Memory: Set the memory key to match the key used in your Conversation Chain (usually chat_history). The values stored here are the previous messages from the conversation, which the agent can use as context to give more relevant and natural responses.
Buffer Window Memory is set with a size of 20, meaning it will store the last 20 messages from the conversation so the AI can use them as context.
- Test the Chatbot
Once your Buffer Window Memory is set up and connected to the ChatOpenAI node, click the purple chat icon in the top-right corner to start talking to your chatbot.
Chatbot in action, meeting Robert, the witty assistant who’s always ready to talk (and maybe nap if you’re boring).
Customizing Your Chatbot’s Personality
In the Additional Parameters section, you can modify the Chat Prompt Template to define how your chatbot behaves. This template acts like the chatbot’s “personality script,” letting you decide its tone, style, and focus. For example, you could make it friendly and casual, professional and concise, or even humorous and quirky. The instructions you put here guide how it responds in every conversation, so be as specific as you like to match your use case.
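For example, a System Message along these lines (the exact wording here is purely illustrative, so adjust it to taste) would give you the witty persona shown in the screenshots: "You are Robert, a witty AI assistant. Keep your answers short, sprinkle in light humor, and stay in character. If you don't know something, admit it with a joke instead of guessing." This instruction is sent to the model with every message, so whatever you write here shapes the entire conversation.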
The Flowise canvas during chatbot configuration, with a focus on the Conversation Chain node. An arrow points to the Additional Parameters section, highlighting the System Message where a witty AI persona (named Robert) is defined.
Testing the chatbot in Flowise’s chat interface to verify the parameter configuration.
In the Settings tab, you can go beyond just personality tweaks. Add starter prompts to kick off the chat, follow-up prompts to guide the conversation, and even enable chat feedback so users can rate your bot’s replies. You can also turn on features like file uploads, custom instructions, or language settings, making your chatbot smarter, friendlier, and more useful from the start.
Deploy Your Chatbot in Flowise
Once your chatbot is built and tested inside Flowise, you don’t have to keep it trapped on your local machine. You can share it with the world or just with a select group by using Flowise’s various deployment options:
The Flowise interface displaying the share options for your chatbot project. An arrow points to the Share button, indicating where you can generate an embed code or API endpoint to share your creation.
The Flowise interface showing the various sharing options available.
- Embed Code:
- Copy the HTML/JavaScript snippet provided in the deployment settings.
- Paste this code into your own website, blog, or web app.
- The chatbot will appear directly on your site, no need for users to visit Flowise.
- API Endpoint:
- Copy the API URL and authentication key from the deployment panel.
- Use this if you want to integrate the chatbot into a custom app, mobile app, or other services (a quick curl example follows below).
- Public Link:
- Click Generate Link to get a direct URL to your chatbot.
- Send this link to anyone you want, they can open it in their browser and chat instantly.
- Best for quick demos or sharing with team members.
"Hooray! The chatbot is live on its public link, our AI baby has officially left the nest."
Conclusion
And there you have it, from setting up Flowise to chatting with your very own AI creation, you’ve gone from curious tinkerer to chatbot builder in just a few steps.
Whether you keep your bot as a personal sidekick, embed it on your site, or unleash it to the public, the real magic is in how you customize it. Memory nodes, custom prompts, and behavioral tweaks turn a basic AI into something truly unique and uniquely yours.
So go ahead: experiment, break things, fix them again, and most importantly, have fun with it. Because the future of AI isn’t just about big companies and billion-dollar labs, it’s also about creators like you, building something smart, funny, and maybe even a little weird.
Now, your chatbot is live. The only question left is, what’s the first thing you’ll ask it? 😉