Lightning Developer

Build AI Agents Locally and Access Them Anywhere with Langflow

There was a time when building AI agents meant stitching together multiple libraries, writing glue code, and debugging small mismatches between APIs. That approach still exists, but tools like Langflow have made the process far more approachable. Instead of writing everything from scratch, you now work on a visual canvas where components connect like building blocks.

What makes this even more interesting is that you can run the entire system on your own machine. Your workflows, your data, and your experiments stay local. But that also introduces a small limitation. By default, your setup is only accessible on your own network. If you want to test your agent on your phone, share it with someone else, or integrate it with an external service, you need a way to expose it.

This guide walks through that journey in a practical and grounded way. You will install Langflow, run it locally, and then make it accessible from anywhere using a simple tunneling approach.


Understanding Langflow in Practice

Langflow is a visual environment for building AI pipelines. Instead of writing long scripts, you connect components such as language models, vector databases, APIs, and logic blocks on a canvas.

Each workflow you create becomes more than just a visual diagram. It turns into a working API endpoint. That means your flow is not only interactive in the UI but also usable in real applications.

What stands out is flexibility. You are not tied to one provider. You can switch between different models, use local inference, or connect external tools depending on your needs.

Why Running It Locally Makes Sense

Running Langflow on your own system gives you control that cloud setups cannot always offer.

Privacy is the most obvious benefit. If you are working with sensitive documents or internal datasets, keeping everything on your machine avoids unnecessary exposure.

Cost is another factor. When paired with local models, you can experiment freely without worrying about usage-based billing.

There is also a level of customisation that comes with self-hosting. You decide how your environment is configured, what services it connects to, and how data is stored.

Getting Started with Installation

You can install Langflow in multiple ways depending on how comfortable you are with Python environments.

Option 1: Using uv

pip install uv            # install the uv package manager
uv pip install langflow   # install Langflow
uv run langflow run       # start the server

This method is clean and efficient, especially if you want isolated environments.

Option 2: Using pip

pip install langflow
langflow run

This works well if you already manage Python environments manually.
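
To keep the install isolated, you can pair this with a standard virtual environment. A minimal sketch:

python -m venv .venv           # create an isolated environment
source .venv/bin/activate      # activate it (.venv\Scripts\activate on Windows)
pip install langflow
langflow run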

Option 3: Using Docker

docker run -p 7860:7860 langflowai/langflow:latest

This approach avoids local dependency management entirely. Everything runs inside a container.
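
One caveat: a plain docker run loses your flows when the container is removed. If you want them to stick around, you can mount a volume at the same config path the Compose setup later in this guide uses; this is a sketch based on that setup:

docker run -p 7860:7860 \
  -e LANGFLOW_CONFIG_DIR=/app/langflow \
  -v langflow-data:/app/langflow \
  langflowai/langflow:latest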

Once started, open:

http://localhost:7860

You should see the Langflow interface ready to use.
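
If the page does not load, a quick terminal check confirms whether the server is responding at all (any HTTP response means the process is up):

curl -I http://localhost:7860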

Running Langflow on a Different Port

If port 7860 is already occupied, you can change it easily:

langflow run --port 8080

Or set it as an environment variable:

export LANGFLOW_PORT=8080
langflow run
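
If you are not sure what is holding the default port in the first place, you can check before picking a new one (macOS/Linux):

lsof -i :7860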

A More Stable Setup with Docker Compose

For longer-term usage, especially if you want persistence, Docker Compose with a database is a better choice.

Create a docker-compose.yml file:

services:
  langflow:
    image: langflowai/langflow:latest
    pull_policy: always
    ports:
      - "7860:7860"
    depends_on:
      - postgres
    env_file:
      - .env
    environment:
      - LANGFLOW_DATABASE_URL=postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - LANGFLOW_CONFIG_DIR=/app/langflow
    volumes:
      - langflow-data:/app/langflow

  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - langflow-postgres:/var/lib/postgresql/data

volumes:
  langflow-postgres:
  langflow-data:

Create a .env file:

POSTGRES_USER=langflow
POSTGRES_PASSWORD=changeme
POSTGRES_DB=langflow
LANGFLOW_SUPERUSER=admin
LANGFLOW_SUPERUSER_PASSWORD=changeme
LANGFLOW_AUTO_LOGIN=False
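
The changeme values are placeholders, so replace them before exposing anything. One quick way to generate a random password:

openssl rand -base64 24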

Start everything:

docker compose up -d

This setup ensures your flows and configurations persist across restarts.
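
To confirm both containers came up, and to watch Langflow's startup logs:

docker compose ps
docker compose logs -f langflow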

Making Your Local Setup Accessible

Once Langflow is running, it is still limited to your local machine. To access it remotely, you need to create a tunnel.

This is where Pinggy becomes useful.

Create a Public URL

Run this command:

# -p 443 connects over the HTTPS port; -R0:localhost:7860 forwards public traffic to Langflow
ssh -p 443 -R0:localhost:7860 free.pinggy.io

After running it, you will receive a public URL that maps to your local server.

You can now open that link from any device and access your Langflow instance.
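
Tunnels over SSH can drop when the connection sits idle. Adding standard SSH keepalive options makes them more resilient; this is generic SSH behaviour rather than anything Pinggy-specific:

ssh -p 443 -o ServerAliveInterval=30 -R0:localhost:7860 free.pinggy.io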

Adding Basic Protection

If you are sharing access, it is a good idea to add a simple authentication layer:

# b:username:password enables HTTP basic auth on the tunnel
ssh -p 443 -R0:localhost:7860 -t free.pinggy.io b:username:password

This ensures that only users with the credentials can access your setup.

Building Your First Flow

Once everything is running, the real value comes from building workflows.

A simple starting point is a question-answering agent that fetches information from the web.

Basic components include:

  • Chat Input for user queries
  • Search tool for fetching information
  • Parser to convert structured data into text
  • Prompt Template to combine inputs
  • Language model to generate responses
  • Chat Output to display results

You connect these components visually. Each connection represents how data flows from one step to another.

Exploring a RAG Workflow

One of the most practical use cases is Retrieval-Augmented Generation (RAG).

In simple terms, you allow your agent to answer questions based on your own documents.

The flow usually looks like this:

  • Upload a document
  • Split it into smaller chunks
  • Convert those chunks into embeddings
  • Store them in a vector database
  • Retrieve relevant pieces during a query
  • Combine them with the question
  • Generate a final answer

This approach makes your agent far more useful for domain-specific tasks.

Running Everything Locally with Ollama

If you want complete control, you can avoid external APIs entirely.

Start by running a local model:

ollama pull llama3.2   # download the model weights
ollama serve           # start the Ollama server (skip if it is already running)

Then connect Langflow to:

http://localhost:11434
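
Before wiring it into Langflow, it is worth confirming the model actually responds. Ollama exposes a simple generate endpoint:

curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Hello", "stream": false}'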

Now your entire pipeline runs on your own machine, from document processing to response generation.

Using Your Flow as an API

Every workflow you build can be called programmatically.

Example:

curl -X POST \
  "http://localhost:7860/api/v1/run/<your-flow-id>" \
  -H "Content-Type: application/json" \
  -d '{"input_value": "What does the document say about pricing?"}'

If you replace localhost with your public tunnel URL, you can call your agent from anywhere.
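
For example, assuming a placeholder tunnel URL, and passing the basic auth credentials with -u if you enabled them earlier:

curl -X POST \
  "https://<your-tunnel-url>/api/v1/run/<your-flow-id>" \
  -u username:password \
  -H "Content-Type: application/json" \
  -d '{"input_value": "What does the document say about pricing?"}'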

Flexibility Across Tools and Models

Langflow supports a wide range of integrations. You can experiment with different models, connect various databases, and integrate external services without changing your entire setup.

This flexibility makes it useful not just for experimentation, but also for building real applications.

Conclusion

Self-hosting Langflow changes how you approach building AI systems. Instead of relying entirely on external platforms, you gain ownership over your workflows and data.

Adding remote access completes the picture. It allows your local setup to behave like a deployable service without the overhead of managing servers.

The combination of a visual builder, local control, and simple remote access creates a workflow that feels both powerful and practical. It lowers the barrier to experimentation while still giving you the tools needed for more serious projects.
