Tarno Pon
Not Just API Calls: How Flowork Runs Local AI (Stable Diffusion & Training) On Your Server

In the gold rush of AI automation, a clear pattern has emerged. Tools like Zapier, Make, and n8n are racing to integrate "AI," but for 99% of them, this translates to one thing: making an API call.

You want to summarize text? Your data is sent to OpenAI.
You want to generate an image? Your prompt is sent to Midjourney or DALL-E.

This is automation, yes, but it's built on a foundation of trade-offs. You're trading:

  • Cost: You pay per token, per call, per image. As you scale, so does your bill.
  • Privacy: Your sensitive company data, customer information, and proprietary prompts are continuously streamed to third-party servers. You are operating on trust.
  • Control: You're locked into another company's model, ethics, and availability. If their API is down, so is your "AI-powered" workflow.

But what if there was a different way? What if "AI automation" didn't mean "wrapping an API"? What if it meant running the models yourself, on your own hardware, inside your own network?

I've been digging into a new platform, Flowork, and the architecture I've found (based on its core files) isn't just a minor improvement; it's a fundamental paradigm shift. Flowork is designed from the ground up to be a local-first AI orchestrator.

Today, we're going to dive into the code and configs to prove it.


Part 1: The Architectural Proof (It Runs on Your Server)

Before we can even talk about local AI, we need to establish where the engine lives. This is the core difference. Flowork employs a hybrid architecture: a cloud-based GUI for convenience, but a powerful, containerized engine that runs entirely on your local server.

This isn't an assumption; it's in the docker-compose.yml file.

# FILE: docker-compose.yml

services:
  flowork_gateway:
    image: flowork/gateway:latest
    container_name: flowork_gateway
    # ... (ports and network)

  flowork_core:
    image: flowork/core:latest
    container_name: flowork_core
    # ... (volumes and env)

  flowork_cloudflared:
    image: cloudflare/cloudflared:latest
    container_name: flowork_cloudflared
    # ... (command and token)

This tells us everything. The system is split into three main services:

  1. flowork_core: This is the brain. The "Async Orchestrator" that executes workflows. It lives 100% on your machine.
  2. flowork_gateway: This is the receptionist. It handles secure communication from the outside world (the GUI) and passes instructions to the flowork_core.
  3. flowork_cloudflared: This is the secure tunnel that connects the GUI (at https://flowork.cloud) to your local gateway without you needing to open firewall ports.

Because the flowork_core is local, it has direct access to your local hardware (RAM, CPU, and, critically, GPUs) and your local network. This architecture is the foundation that makes local AI not just possible, but practical.
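
Want to verify this on your own box? A few lines of Python (my own sketch, not Flowork code) can confirm all three containers are running locally. It shells out to Docker Compose v2 and handles both of its JSON output styles:

# sanity check -- a sketch of mine, not part of Flowork.
# Confirms the three services from docker-compose.yml are up on THIS machine.
import json
import subprocess

result = subprocess.run(
    ["docker", "compose", "ps", "--format", "json"],
    capture_output=True, text=True, check=True,
)

raw = result.stdout.strip()
if raw.startswith("["):  # older Compose v2: a single JSON array
    entries = json.loads(raw)
else:                    # newer Compose v2: one JSON object per line
    entries = [json.loads(line) for line in raw.splitlines() if line.strip()]

running = {entry["Name"] for entry in entries}
for name in ("flowork_gateway", "flowork_core", "flowork_cloudflared"):
    print(f"{name}: {'running' if name in running else 'NOT FOUND'}")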

Part 2: Local Generation — Meet the Stable Diffusion XL Module

This is where it gets exciting. Zapier can call DALL-E. Flowork can become an image generator.

Don't take my word for it. Let's look at the module's manifest file.

# FILE: C:\FLOWORK\modules\stable_diffusion_xl\manifest.json

{
  "id": "stable_diffusion_xl",
  "name": "Stable Diffusion XL",
  "version": "1.0.0",
  "description": "Create images from text prompts using the local Stable Diffusion XL model.",
  "author": "Flowork",
  "license": "Flowork",
  "type": "module",
  "category": "AI",
  "icon": "icon.png",
  "tags": ["AI", "Image Generation", "Stable Diffusion", "SDXL", "Local"]
}

The key phrase is right in the description: "...using the local Stable Diffusion XL model."

This isn't a wrapper. It's not sending your prompt to an external API. It's a full-blown, locally-run module. And the proof is in its dependencies. When you install this module, what does Flowork fetch?

# FILE: C:\FLOWORK\modules\stable_diffusion_xl\requirements.txt

accelerate
torch
diffusers
transformers
Pillow

This is the "smoking gun." You don't see requests or openai. You see torch, diffusers, transformers, and accelerate: the core Python libraries for actually running diffusion models.
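
The module's internal code isn't published here, but with exactly these dependencies, local generation looks roughly like this. This is a minimal sketch using the public diffusers API; the model ID and prompt are my own placeholders:

# sketch: what "local SDXL" means with torch + diffusers.
# The base weights are downloaded once; after that, everything runs
# on your own GPU.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # your hardware, not someone else's

image = pipe(prompt="a lighthouse at dawn, oil painting").images[0]
image.save("output.png")  # Pillow handles the save -- hence that dependency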

Why This Changes Everything:

  • Zero Cost: Generate 10 or 10,000 images. There's no per-image bill; your only marginal cost is electricity.
  • Total Privacy: Your prompts, your generated images, and your iterative ideas never leave your server.
  • Infinite Customization: Because it's local, you can (in theory) point it at your own fine-tuned SDXL models, LoRAs, and checkpoints on your local drive (see the sketch below).

This is the power of a local-first architecture. It moves the computation from "their server" to "your server."
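
And that customization bullet isn't hand-waving at the library level. diffusers can load checkpoints and LoRAs straight from disk, so a local-first engine can, in principle, do the same. Another hedged sketch; the file paths are purely illustrative:

# sketch: pointing the same pipeline at YOUR models on YOUR drive.
import torch
from diffusers import StableDiffusionXLPipeline

# Load a fine-tuned SDXL checkpoint from a local .safetensors file
pipe = StableDiffusionXLPipeline.from_single_file(
    "/models/my-finetuned-sdxl.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Layer a LoRA from your local drive on top
pipe.load_lora_weights("/models/loras/my-style.safetensors")

image = pipe(prompt="product shot in my brand style").images[0]
image.save("branded.png")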

Part 3: The Holy Grail — AITrainingService

Okay, local image generation is impressive. But what about training? Fine-tuning a model on your own data is the true holy grail for creating a customized AI. This is something cloud-only platforms wouldn't dream of offering.

Flowork is built for it.

The platform is designed as a collection of microservices, all defined in one central file. And look what we find inside.

# FILE: C:\FLOWORK\services\services.json

[
  {
    "service_name": "DatabaseService",
    "description": "Manages all database connections and operations.",
    "core_service": true
  },
  {
    "service_name": "ModuleManagerService",
    "description": "Manages the lifecycle of modules (installation, isolation, execution).",
    "core_service": true
  },
  {
    "service_name": "AITrainingService",
    "description": "Handles local AI model training and fine-tuning using user-provided datasets.",
    "core_service": false
  },
  {
    "service_name": "CloudSyncService",
    "description": "(English Hardcode) End-to-End Encrypted (E2EE) backup and restore service.",
    "core_service": false
  }
  // ... and many more services
]

Did you spot it? Right there, listed as one of the platform's built-in services:

AITrainingService: "Handles local AI model training and fine-tuning using user-provided datasets."

This is not a trivial feature. This is a statement of intent. Flowork is being positioned as a platform where you don't just use AI; you create it.

This service implies you can point Flowork at a dataset (e.g., all your company's internal documents, your support tickets, or a folder of your product photos) and run a fine-tuning job, again entirely locally. The resulting model is yours, trained on your data, and it never leaves your network.
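
The AITrainingService interface itself isn't documented in these files, so treat the following as a conceptual sketch of what "local fine-tuning on user-provided datasets" means with the same Python stack. The transformers and datasets libraries, the gpt2 base model, and all paths are my assumptions, not Flowork's actual code:

# sketch: a local fine-tuning job. Nothing touches the network except
# the one-time base-model download.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "user-provided dataset": plain-text files already on your server
data = load_dataset("text", data_files={"train": "data/internal_docs/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = data["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="models/company-lm",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("models/company-lm")  # the weights never leave your disk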

This is a level of data sovereignty and power that API-based tools simply cannot offer.

Part 4: The Secret Sauce: How It Doesn't Explode (.venv Isolation)

If you're a developer, you might be thinking: "This sounds like a nightmare. How do you manage dependencies?"

The Stable Diffusion XL module needs torch and diffusers, which are massive. What if another module needs an older version of torch? Or a simple process_trigger module just needs psutil?

This is the final, brilliant piece of the architecture. Let's look back at the services.json file, at the ModuleManagerService:

ModuleManagerService: "Manages the lifecycle of modules (installation, isolation, execution)."

The key word is isolation. From analyzing the service's job (and its counterpart, PluginManagerService), it's clear that Flowork does not install all module dependencies into one giant, global environment.

Instead, each module and plugin lives in its own sandboxed Python Virtual Environment (.venv).

When you install the Stable Diffusion XL module, the ModuleManagerService creates a dedicated .venv for it and installs torch, diffusers, etc., only inside that environment. When you install the process_trigger module, it gets its own tiny .venv with just psutil inside.
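
The internals of ModuleManagerService aren't published, but the pattern this implies is standard Python. A bare-bones sketch of per-module isolation (the function and paths are mine, not Flowork's):

# sketch: one dedicated .venv per module, as the manifest layout implies.
import subprocess
import sys
from pathlib import Path

def install_module(module_dir: Path) -> None:
    venv_dir = module_dir / ".venv"

    # 1. Create a virtual environment dedicated to this one module
    subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)

    # 2. Install ONLY this module's requirements into its own venv
    pip = venv_dir / "bin" / "pip"  # on Windows: venv_dir / "Scripts" / "pip.exe"
    subprocess.run(
        [str(pip), "install", "-r", str(module_dir / "requirements.txt")],
        check=True,
    )

install_module(Path("modules/stable_diffusion_xl"))  # gets torch, diffusers...
install_module(Path("modules/process_trigger"))      # gets just psutil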

This is the "secret sauce" that makes the local AI strategy feasible. It solves "dependency hell" before it even starts.

  • Stability: Modules can't conflict with each other.
  • Security: Modules are sandboxed.
  • Scalability: You can add complex, heavy-duty AI modules without breaking your simple utility modules.

Conclusion: Stop Calling AI. Start Running It.

Flowork represents a fundamental shift in what an automation platform can be. It's moving beyond the "If-This-Then-That" (IFTTT) model of API calls and into the realm of true, local orchestration.

By running the core engine on your server, Flowork unlocks three benefits that cloud-only tools will never be able to match:

  1. True Data Sovereignty: Your data stays with you, whether you're generating images or fine-tuning models on proprietary documents.
  2. Zero-Cost Scaling: Your only cost is the hardware you already own. Run one job or one million; your bill is the same.
  3. Ultimate Power & Control: You're not limited by an API's rate limits, pricing, or feature set. You are running the code.

The automation wars are heating up, but Flowork isn't just bringing a knife to a gunfight. It's bringing a fully-equipped, local, AI-powered factory.

It's time to stop just calling AI. It's time to start running it.

Website: https://flowork.cloud/
Docs: https://docs.flowork.cloud/
GitHub: https://github.com/flowork-dev/Visual-AI-Workflow-Automation-Platform
