Alain Airom

Building Custom Components in Langflow 🛠️

I made two Python custom components in Langflow 🤷‍♂️

TL;DR — What is Langflow?

(Excerpt from Langflow’s site and GitHub repository)

Langflow is a powerful tool to build and deploy AI agents and MCP servers. It comes with batteries included and supports all major LLMs, vector databases and a growing library of AI tools.

✨ Highlight features

  • Visual builder interface to quickly get started and iterate.
  • Source code access lets you customize any component using Python.
  • Interactive playground to immediately test and refine your flows with step-by-step control.
  • Multi-agent orchestration with conversation management and retrieval.
  • Deploy as an API or export as JSON for Python apps.
  • Deploy as an MCP server and turn your flows into tools for MCP clients.
  • Observability with LangSmith, LangFuse and other integrations.
  • Enterprise-ready security and scalability.

Langflow’s strikingly clear and intuitive interface revolutionizes the no-code development landscape, empowering users to effortlessly create sophisticated LLM-based applications, intelligent agents, and dynamic RAG flows. Its streamlined design simplifies complex processes, allowing both seasoned developers and newcomers to quickly bring their AI ideas to life without writing a single line of code. This accessibility fosters rapid prototyping and innovation, making advanced AI capabilities available to a broader audience.

Creating Custom Components

Why would one want to create custom components?

Excerpt from the documentation:

Custom components extend Langflow’s functionality through Python classes that inherit from Component. This enables integration of new features, data manipulation, external services, and specialized tools.
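To make this concrete, below is a minimal sketch of the anatomy such a class follows, based on the documented pattern (metadata class attributes, an inputs list, an outputs list, and a method that produces an output). The names here are purely illustrative; the full working example comes next.

# Minimal, illustrative skeleton of a custom component (not the article's component).
from langflow.custom import Component
from langflow.io import MessageTextInput, Output
from langflow.schema import Message


class MySketchComponent(Component):
    display_name = "My Sketch"        # name shown in the UI
    description = "Illustrative skeleton of a custom component."
    icon = "code"

    inputs = [
        MessageTextInput(name="text", display_name="Text"),
    ]
    outputs = [
        Output(name="result", display_name="Result", method="build_result"),
    ]

    def build_result(self) -> Message:
        # Input values are exposed as attributes on the component instance.
        return Message(text=f"You said: {self.text}")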

With that groundwork laid, and after a discussion with a colleague about a customer's specific needs, I decided to try building a custom component in Python! 🫨

Hello World Component

The first component is a sort of “hello world” (well actually “hello Alain Working”).

I followed the documentation, which describes the steps clearly.

  • Create a folder for your Python code.
  • Prepare a Python virtual environment as usual.

I had to install Python 3.12 with Homebrew (brew install python@3.12) alongside my existing Python 3.14, due to compatibility issues.

python3.12 -m venv lf_venv

source lf_venv/bin/activate


pip install --upgrade pip

pip install ollama
pip install langflow
pip install langchain
# pip install langchain-community
  • Set the components path as an environment variable.
export LANGFLOW_COMPONENTS_PATH="/Users/xxxx/Devs/langflow-custom"
  • Create a sub-folder for your component's category (structure excerpt from the Langflow documentation).
mkdir /Users/xxxx/Devs/langflow-custom/Utility
This is how it should look:

/your/custom/components/path/    # Base directory set by LANGFLOW_COMPONENTS_PATH
    └── category_name/          # Required category subfolder that determines menu name
        ├── __init__.py         # Required
        └── custom_component.py # Component file

# create the required (empty) __init__.py in the category folder
touch /your/custom/components/path/category_name/__init__.py
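For this walkthrough, with Utility as the category name, the layout therefore ends up as follows (hello_alain_component.py is the component file created in the next step):

/Users/xxxx/Devs/langflow-custom/    # LANGFLOW_COMPONENTS_PATH
    └── Utility/                     # category shown in the components menu
        ├── __init__.py
        └── hello_alain_component.py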
  • The code of the custom component (hello_alain_component.py)👇
# hello_alain_component.py
from langflow.custom import CustomComponent
from langflow.io import Output
from langflow.schema import Message 
from typing import Dict, Any

class HelloAlainComponent(CustomComponent):
   display_name = "Hello Alain WORKING" 
    description = "A simple component that outputs a fixed greeting using the required dual-method structure."
    icon = "MessageSquare" 
    name = "HelloAlainComponent"

    # --- Component Inputs ---
    inputs = []

    outputs = [
        Output(
            name="greeting", 
            display_name="Greeting Message",
            method="build", 
            output_type=Message 
        ),
    ]

    def build(self) -> Message: 
        """
        Generates and returns the fixed greeting message.
        """

        greeting_text = "Hello Alain! Your component is here!"
        self.status = "Greeting generated."

        final_message = Message(
            text=greeting_text, 
            sender="System",
            sender_name=self.display_name
        )

        return final_message

    async def build_results(self) -> tuple[Dict[str, Any], Dict[str, Any]]: 
        """
        Satisfies the runner's requirement for async execution and tuple return,
        while ensuring the first element is a dictionary to prevent the .items() error.
        """

        message_result = self.build()

        primary_result_dict = {"text": message_result.text} 

        artifacts = {} 

        return primary_result_dict, artifacts
  • From the parent folder, run Langflow (installed earlier with pip).
> langflow run
✓ Initializing Langflow...
✓ Checking Environment...
▣ Starting Core Services...2025-11-04T15:28:04.442472Z [warning  ] DEPRECATION NOTICE: Starting in v1.7, CORS will be more restrictive by default. Current behavior allows all origins (*) with credentials enabled. Consider setting LANGFLOW_CORS_ORIGINS for production deployments. See documentation for secure CORS configuration.
2025-11-04T15:28:04.442649Z [warning  ] SECURITY NOTICE: Current CORS configuration allows all origins with credentials. In v1.7, credentials will be automatically disabled when using wildcard origins. Specify exact origins in LANGFLOW_CORS_ORIGINS to use credentials securely.
✓ Starting Core Services
✓ Connecting Database...
✓ Loading Components...
✓ Adding Starter Projects...
□ Launching Langflow...2025-11-04T15:28:07.834862Z [error    ] Error importing module langflow.components.composio.googletasks_composio: No module named 'base'
▢ Launching Langflow...2025-11-04T15:28:15.923981Z [error    ] Error while getting output types from code
2025-11-04T15:28:15.924597Z [error    ] Error while getting output types from code
✓ Launching Langflow...

╭─────────────────────────────────────────────────────────────────────────╮
│                                                                         │
│  Welcome to Langflow                                                    │
│                                                                         │
│  🌟 GitHub: Star for updates → https://github.com/langflow-ai/langflow  │
│  💬 Discord: Join for support → https://discord.com/invite/EqksyE2EX9   │
│                                                                         │
│  We collect anonymous usage data to improve Langflow.                   │
│  To opt out, set: DO_NOT_TRACK=true in your environment.                │
│                                                                         │
│  🟢 Open Langflow → http://localhost:7860                               │
│                                                                         │
╰─────────────────────────────────────────────────────────────────────────╯
  • Build a new flow, drag your component (now visible in the components menu on the left) onto the canvas, and run it. If the execution time is displayed in green, you’re all done! ✌️

  • Stop Langflow with the Ctrl+C key combination.
✓ Stopping Server...
✓ Cancelling Background Tasks...
✓ Cleaning Up Services...
✓ Clearing Temporary Files...
✓ Finalizing Shutdown...

👋 See you next time!

So what happened here?

Implementation: The component’s code defines a class, HelloAlainComponent, that inherits from CustomComponent and implements two methods, build and build_results, to handle both UI visualization and runner execution.

Component Metadata and Output Declaration: The component starts with standard metadata and a single output definition:

  • Class and Inheritance: class HelloAlainComponent(CustomComponent):
  • Metadata: Sets the visual identity (display_name, description, icon, name).
  • Outputs: Defines a single output port named "greeting" with the type Message. Crucially, the method parameter is set to "build". This tells the Langflow UI that the output type is determined by the synchronous def build() method, which makes the port correctly colored and connectable.
# Output declaration for UI connection
outputs = [
    Output(
        name="greeting", 
        display_name="Greeting Message",
        method="build", # Connects to the def build() method
        output_type=Message 
    ),
]

The Core Logic (def build): This synchronous method performs the actual work — creating the Message object. In this case, the work is simply generating a static string.

  • It creates the greeting text ("Hello Alain! Your component is here!").
  • It packages this text into a Langflow Message object.
  • It sets the component’s internal self.status for runtime feedback in the flow.
  • It returns a single Message object. This single return satisfies the UI's connection type requirement.
def build(self) -> Message: 
    # ... logic to create final_message ...
    return final_message

The Runner Compatibility Layer (async def build_results):

async def build_results(self) -> tuple[Dict[str, Any], Dict[str, Any]]: 
    # 1. Execute the core logic
    message_result = self.build()

    # 2. Wrap the text in a dict to satisfy the runner's .items() check
    primary_result_dict = {"text": message_result.text} 
    artifacts = {} 

    # 3. Return the required (Dict, Dict) tuple
    return primary_result_dict, artifacts
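A rough illustration of why that first element must be a dictionary. This is not Langflow's actual runner code, just a hypothetical sketch of the kind of iteration the .items() error pointed to:

# Hypothetical sketch of how a runner might consume the tuple (not Langflow's real code).
async def run_component(component):
    results, artifacts = await component.build_results()
    # If `results` were a plain Message instead of a dict, the next line would raise
    # the ".items()" error mentioned above.
    for output_name, value in results.items():
        print(output_name, value)
    return artifacts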

Real Business Component

If you want to build a serious, business-oriented component, as I tried (by reproducing an existing one)…

  • Create a new sub-folder with the desired category name, and add the required empty `__init__.py` file, just as before. In my case, I tried to build my own ‘Ollama’ component; its code is shown below. 👇
from langflow.custom import CustomComponent
from langflow.inputs import StrInput, FloatInput 
from langflow.io import Output, MessageTextInput 
from langflow.schema import Message 
from langchain_community.llms import Ollama 
from typing import Optional, Dict, Any

class OllamaLLMComponent(CustomComponent):
    display_name = "Ollama LLM AAM" 
    description = "Custom Ollama component - AAM."
    icon = "Ollama"
    name = "OllamaLLMComponent"

    inputs = [
        StrInput(name="base_url", display_name="Base URL", info="Endpoint of the Ollama API.", value="http://localhost:11434", advanced=True),
        StrInput(name="model_name", display_name="Model Name", info="The name of the Ollama model to use.", value="granite4:latest", required=True),
        FloatInput(name="temperature", display_name="Temperature", info="Controls randomness.", value=0.7, advanced=True),
        MessageTextInput( 
            name="prompt", 
            display_name="Prompt (Input)", 
            info="The prompt to send to the LLM.",
            required=True,
        ),
    ]

    outputs = [
        Output(
            name="model_response", 
            display_name="Model Response",
            method="build", 
            output_type=Message 
        ),
    ]

    def build(self) -> Message: 
        """
        Performs the core LLM execution and returns the final Message object.
        """

        # 0. Get Parameters
        model_name = getattr(self, "model_name", "granite4:latest")
        base_url = getattr(self, "base_url", "http://localhost:11434")
        temperature = getattr(self, "temperature", 0.7)
        prompt_value = getattr(self, "prompt", None)

        if not prompt_value:
             prompt_text = "Tell me one interesting fact about the model used: " + model_name + "."
             self.status = "Input disconnected. Using hardcoded test prompt."
        else:
            prompt_text = getattr(prompt_value, 'text', str(prompt_value))
            self.status = ""

        # 1. Initialize and Invoke the LLM
        try:
            llm_instance = Ollama(
                model=model_name, 
                base_url=base_url, 
                temperature=temperature
            )
            response_text = llm_instance.invoke(prompt_text) 

        except Exception as e:
            error_msg = f"Ollama Execution Error: Check URL and Model Name. Details: {e}"
            self.status = error_msg
            raise ValueError(error_msg)

        final_message = Message(
            text=response_text, 
            sender="LLM",
            sender_name=model_name
        )

        return final_message

    async def build_results(self) -> tuple[Dict[str, Any], Dict[str, Any]]: 
        """
        Satisfies the runner's requirement for async execution and tuple return,
        ensuring the first element is a dictionary-like object (Dict).
        """

        message_result = self.build()

        primary_result_dict = {"text": message_result.text} 

        artifacts = {} 

        return primary_result_dict, artifacts
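One note on the import: langchain_community.llms.Ollama is deprecated in recent LangChain releases in favour of the langchain-ollama package. If you run into deprecation warnings, the core LLM call can be swapped for something like the sketch below (assuming pip install langchain-ollama and an Ollama server on localhost); it is also a handy way to sanity-check the call outside Langflow.

# Minimal sketch using the langchain-ollama package instead of langchain_community.
# Assumes `pip install langchain-ollama` and a local Ollama server on localhost:11434.
from langchain_ollama import OllamaLLM

llm = OllamaLLM(
    model="granite4:latest",            # same model name as in the component
    base_url="http://localhost:11434",
    temperature=0.7,
)

# .invoke() returns the generated text as a plain string.
print(llm.invoke("Tell me one interesting fact about the granite4:latest model."))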
  • And it worked (after several attempts 🫂).

  • All my components are visible in the sidebar navigation menu.

  • And my Python folder’s structure
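For reference, it ends up roughly like this (the category and file names used for the Ollama component, LLMs and ollama_llm_component.py below, are placeholders for whatever you chose):

/Users/xxxx/Devs/langflow-custom/
    ├── Utility/
    │   ├── __init__.py
    │   └── hello_alain_component.py
    └── LLMs/                            # placeholder category name
        ├── __init__.py
        └── ollama_llm_component.py      # placeholder file name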

Appendix

To get inspired and/or see the original code of a component, just drag the component onto your flow and click on Code.

Below is the original code for the “Ollama” component.

import asyncio
from typing import Any
from urllib.parse import urljoin

import httpx
from langchain_ollama import ChatOllama

from langflow.base.models.model import LCModelComponent
from langflow.base.models.ollama_constants import URL_LIST
from langflow.field_typing import LanguageModel
from langflow.field_typing.range_spec import RangeSpec
from langflow.io import BoolInput, DictInput, DropdownInput, FloatInput, IntInput, MessageTextInput, SliderInput
from langflow.logging import logger

HTTP_STATUS_OK = 200


class ChatOllamaComponent(LCModelComponent):
    display_name = "Ollama"
    description = "Generate text using Ollama Local LLMs."
    icon = "Ollama"
    name = "OllamaModel"

    # Define constants for JSON keys
    JSON_MODELS_KEY = "models"
    JSON_NAME_KEY = "name"
    JSON_CAPABILITIES_KEY = "capabilities"
    DESIRED_CAPABILITY = "completion"
    TOOL_CALLING_CAPABILITY = "tools"

    inputs = [
        MessageTextInput(
            name="base_url",
            display_name="Base URL",
            info="Endpoint of the Ollama API.",
            value="",
            real_time_refresh=True,
        ),
        DropdownInput(
            name="model_name",
            display_name="Model Name",
            options=[],
            info="Refer to https://ollama.com/library for more models.",
            refresh_button=True,
            real_time_refresh=True,
        ),
        SliderInput(
            name="temperature",
            display_name="Temperature",
            value=0.1,
            range_spec=RangeSpec(min=0, max=1, step=0.01),
            advanced=True,
        ),
        MessageTextInput(
            name="format", display_name="Format", info="Specify the format of the output (e.g., json).", advanced=True
        ),
        DictInput(name="metadata", display_name="Metadata", info="Metadata to add to the run trace.", advanced=True),
        DropdownInput(
            name="mirostat",
            display_name="Mirostat",
            options=["Disabled", "Mirostat", "Mirostat 2.0"],
            info="Enable/disable Mirostat sampling for controlling perplexity.",
            value="Disabled",
            advanced=True,
            real_time_refresh=True,
        ),
        FloatInput(
            name="mirostat_eta",
            display_name="Mirostat Eta",
            info="Learning rate for Mirostat algorithm. (Default: 0.1)",
            advanced=True,
        ),
        FloatInput(
            name="mirostat_tau",
            display_name="Mirostat Tau",
            info="Controls the balance between coherence and diversity of the output. (Default: 5.0)",
            advanced=True,
        ),
        IntInput(
            name="num_ctx",
            display_name="Context Window Size",
            info="Size of the context window for generating tokens. (Default: 2048)",
            advanced=True,
        ),
        IntInput(
            name="num_gpu",
            display_name="Number of GPUs",
            info="Number of GPUs to use for computation. (Default: 1 on macOS, 0 to disable)",
            advanced=True,
        ),
        IntInput(
            name="num_thread",
            display_name="Number of Threads",
            info="Number of threads to use during computation. (Default: detected for optimal performance)",
            advanced=True,
        ),
        IntInput(
            name="repeat_last_n",
            display_name="Repeat Last N",
            info="How far back the model looks to prevent repetition. (Default: 64, 0 = disabled, -1 = num_ctx)",
            advanced=True,
        ),
        FloatInput(
            name="repeat_penalty",
            display_name="Repeat Penalty",
            info="Penalty for repetitions in generated text. (Default: 1.1)",
            advanced=True,
        ),
        FloatInput(name="tfs_z", display_name="TFS Z", info="Tail free sampling value. (Default: 1)", advanced=True),
        IntInput(name="timeout", display_name="Timeout", info="Timeout for the request stream.", advanced=True),
        IntInput(
            name="top_k", display_name="Top K", info="Limits token selection to top K. (Default: 40)", advanced=True
        ),
        FloatInput(name="top_p", display_name="Top P", info="Works together with top-k. (Default: 0.9)", advanced=True),
        BoolInput(name="verbose", display_name="Verbose", info="Whether to print out response text.", advanced=True),
        MessageTextInput(
            name="tags",
            display_name="Tags",
            info="Comma-separated list of tags to add to the run trace.",
            advanced=True,
        ),
        MessageTextInput(
            name="stop_tokens",
            display_name="Stop Tokens",
            info="Comma-separated list of tokens to signal the model to stop generating text.",
            advanced=True,
        ),
        MessageTextInput(
            name="system", display_name="System", info="System to use for generating text.", advanced=True
        ),
        BoolInput(
            name="tool_model_enabled",
            display_name="Tool Model Enabled",
            info="Whether to enable tool calling in the model.",
            value=True,
            real_time_refresh=True,
        ),
        MessageTextInput(
            name="template", display_name="Template", info="Template to use for generating text.", advanced=True
        ),
        *LCModelComponent._base_inputs,
    ]

    def build_model(self) -> LanguageModel:  # type: ignore[type-var]
        # Mapping mirostat settings to their corresponding values
        mirostat_options = {"Mirostat": 1, "Mirostat 2.0": 2}

        # Default to 0 for 'Disabled'
        mirostat_value = mirostat_options.get(self.mirostat, 0)

        # Set mirostat_eta and mirostat_tau to None if mirostat is disabled
        if mirostat_value == 0:
            mirostat_eta = None
            mirostat_tau = None
        else:
            mirostat_eta = self.mirostat_eta
            mirostat_tau = self.mirostat_tau

        # Mapping system settings to their corresponding values
        llm_params = {
            "base_url": self.base_url,
            "model": self.model_name,
            "mirostat": mirostat_value,
            "format": self.format,
            "metadata": self.metadata,
            "tags": self.tags.split(",") if self.tags else None,
            "mirostat_eta": mirostat_eta,
            "mirostat_tau": mirostat_tau,
            "num_ctx": self.num_ctx or None,
            "num_gpu": self.num_gpu or None,
            "num_thread": self.num_thread or None,
            "repeat_last_n": self.repeat_last_n or None,
            "repeat_penalty": self.repeat_penalty or None,
            "temperature": self.temperature or None,
            "stop": self.stop_tokens.split(",") if self.stop_tokens else None,
            "system": self.system,
            "tfs_z": self.tfs_z or None,
            "timeout": self.timeout or None,
            "top_k": self.top_k or None,
            "top_p": self.top_p or None,
            "verbose": self.verbose,
            "template": self.template,
        }

        # Remove parameters with None values
        llm_params = {k: v for k, v in llm_params.items() if v is not None}

        try:
            output = ChatOllama(**llm_params)
        except Exception as e:
            msg = (
                "Unable to connect to the Ollama API. ",
                "Please verify the base URL, ensure the relevant Ollama model is pulled, and try again.",
            )
            raise ValueError(msg) from e

        return output

    async def is_valid_ollama_url(self, url: str) -> bool:
        try:
            async with httpx.AsyncClient() as client:
                return (await client.get(urljoin(url, "api/tags"))).status_code == HTTP_STATUS_OK
        except httpx.RequestError:
            return False

    async def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None):
        if field_name == "mirostat":
            if field_value == "Disabled":
                build_config["mirostat_eta"]["advanced"] = True
                build_config["mirostat_tau"]["advanced"] = True
                build_config["mirostat_eta"]["value"] = None
                build_config["mirostat_tau"]["value"] = None

            else:
                build_config["mirostat_eta"]["advanced"] = False
                build_config["mirostat_tau"]["advanced"] = False

                if field_value == "Mirostat 2.0":
                    build_config["mirostat_eta"]["value"] = 0.2
                    build_config["mirostat_tau"]["value"] = 10
                else:
                    build_config["mirostat_eta"]["value"] = 0.1
                    build_config["mirostat_tau"]["value"] = 5

        if field_name in {"base_url", "model_name"}:
            if build_config["base_url"].get("load_from_db", False):
                base_url_value = await self.get_variables(build_config["base_url"].get("value", ""), "base_url")
            else:
                base_url_value = build_config["base_url"].get("value", "")

            if not await self.is_valid_ollama_url(base_url_value):
                # Check if any URL in the list is valid
                valid_url = ""
                check_urls = URL_LIST
                if self.base_url:
                    check_urls = [self.base_url, *URL_LIST]
                for url in check_urls:
                    if await self.is_valid_ollama_url(url):
                        valid_url = url
                        break
                if valid_url != "":
                    build_config["base_url"]["value"] = valid_url
                else:
                    msg = "No valid Ollama URL found."
                    raise ValueError(msg)
        if field_name in {"model_name", "base_url", "tool_model_enabled"}:
            if await self.is_valid_ollama_url(self.base_url):
                tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
                build_config["model_name"]["options"] = await self.get_models(
                    self.base_url, tool_model_enabled=tool_model_enabled
                )
            elif await self.is_valid_ollama_url(build_config["base_url"].get("value", "")):
                tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
                build_config["model_name"]["options"] = await self.get_models(
                    build_config["base_url"].get("value", ""), tool_model_enabled=tool_model_enabled
                )
            else:
                build_config["model_name"]["options"] = []
        if field_name == "keep_alive_flag":
            if field_value == "Keep":
                build_config["keep_alive"]["value"] = "-1"
                build_config["keep_alive"]["advanced"] = True
            elif field_value == "Immediately":
                build_config["keep_alive"]["value"] = "0"
                build_config["keep_alive"]["advanced"] = True
            else:
                build_config["keep_alive"]["advanced"] = False

        return build_config

    async def get_models(self, base_url_value: str, *, tool_model_enabled: bool | None = None) -> list[str]:
        """Fetches a list of models from the Ollama API that do not have the "embedding" capability.

        Args:
            base_url_value (str): The base URL of the Ollama API.
            tool_model_enabled (bool | None, optional): If True, filters the models further to include
                only those that support tool calling. Defaults to None.

        Returns:
            list[str]: A list of model names that do not have the "embedding" capability. If
                `tool_model_enabled` is True, only models supporting tool calling are included.

        Raises:
            ValueError: If there is an issue with the API request or response, or if the model
                names cannot be retrieved.
        """
        try:
            # Normalize the base URL to avoid the repeated "/" at the end
            base_url = base_url_value.rstrip("/") + "/"

            # Ollama REST API to return models
            tags_url = urljoin(base_url, "api/tags")

            # Ollama REST API to return model capabilities
            show_url = urljoin(base_url, "api/show")

            async with httpx.AsyncClient() as client:
                # Fetch available models
                tags_response = await client.get(tags_url)
                tags_response.raise_for_status()
                models = tags_response.json()
                if asyncio.iscoroutine(models):
                    models = await models
                await logger.adebug(f"Available models: {models}")

                # Filter models that are NOT embedding models
                model_ids = []
                for model in models[self.JSON_MODELS_KEY]:
                    model_name = model[self.JSON_NAME_KEY]
                    await logger.adebug(f"Checking model: {model_name}")

                    payload = {"model": model_name}
                    show_response = await client.post(show_url, json=payload)
                    show_response.raise_for_status()
                    json_data = show_response.json()
                    if asyncio.iscoroutine(json_data):
                        json_data = await json_data
                    capabilities = json_data.get(self.JSON_CAPABILITIES_KEY, [])
                    await logger.adebug(f"Model: {model_name}, Capabilities: {capabilities}")

                    if self.DESIRED_CAPABILITY in capabilities and (
                        not tool_model_enabled or self.TOOL_CALLING_CAPABILITY in capabilities
                    ):
                        model_ids.append(model_name)

        except (httpx.RequestError, ValueError) as e:
            msg = "Could not get model names from Ollama."
            raise ValueError(msg) from e

        return model_ids

✍️ Conclusion: Unlocking Infinite Possibilities

Langflow stands out as more than just a powerful visual tool; it’s a genuine accelerator for building robust, industrialized AI applications — encompassing agentic workflows, advanced RAG systems, and complex LLM pipelines — for both coders and non-coders alike. While its core component library is comprehensive, the true genius lies in its extensibility. The hard-won ability to seamlessly integrate custom Python components, as demonstrated through our technical deep-dive, eliminates all boundaries. This capability transforms Langflow from a defined framework into an open platform, offering an infinite number of possibilities to tailor solutions, address highly specific use-cases, and build proprietary logic that directly answers your unique business needs. Mastering custom components is the ultimate key to unlocking Langflow’s full potential, ensuring your AI applications are not just powerful, but perfectly customized.

Thanks for reading 😄

