
Ayush Kumar for NodeShift (originally published at nodeshift.com)

How to Install FLUX.1-Kontext-Dev Locally?


FLUX.1 Kontext [dev] is a powerful visual editing model designed to transform existing images based on natural-language instructions. Whether it’s adding new elements like a hat to a dog or adjusting the style of a scene, the model understands the context and applies the edit with impressive consistency — all without needing additional fine-tuning.

Built by Black Forest Labs, FLUX.1 Kontext is equipped to handle complex transformations while preserving the original image’s integrity. What makes it truly stand out is its ability to perform multiple edits in a row with minimal drift, allowing creators, designers, and developers to iterate smoothly.

This release — the [dev] version — is open to the research and builder community under a non-commercial license, with high-quality weights and native support in tools like Diffusers and ComfyUI.

If you’re looking to build the next wave of creative tools, this model gives you a serious head start.

Resources

Link: https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev

GPU Configuration Table for FLUX.1 Kontext [dev]


Best Practical Setup (Tested)

  • GPU: RTX A6000 or A100 40 GB
  • RAM: at least 45–60 GB
  • vCPUs: minimum 24
  • Storage: 80–100 GB SSD
  • Environment:
      • Python 3.11
      • CUDA 12.1+
      • PyTorch with torch_dtype=torch.bfloat16
      • Safety filter running on CPU to save VRAM

Step-by-Step Process to Install FLUX.1-Kontext-Dev Locally

For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.

Step 1: Sign Up and Set Up a NodeShift Cloud Account

Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.

Follow the account setup process and provide the necessary details.

Step 2: Create a GPU Node (Virtual Machine)

GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.

Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button, and deploy your first Virtual Machine.

Step 3: Select a Model, Region, and Storage

In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.

We will use a 1× RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.

Step 4: Select Authentication Method

There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.

Step 5: Choose an Image

In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running FLUX.1-Kontext-Dev, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.

We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04

This image is essential because it includes:

  • Full CUDA toolkit (including nvcc)
  • Proper support for building and running GPU-based applications like FLUX.1-Kontext-Dev
  • Compatibility with CUDA 12.1.1 required by certain model operations

Launch Mode

We selected:
Interactive shell server

This gives us SSH access and full control over terminal operations — perfect for installing dependencies, running benchmarks, and launching models like FLUX.1-Kontext-Dev.

Docker Repository Authentication

We left all fields empty here.

Since the Docker image is publicly available on Docker Hub, no login credentials are required.

Identification

Template Name:
nvidia/cuda:12.1.1-devel-ubuntu22.04

These are the CUDA and cuDNN images from gitlab.com/nvidia/cuda; the devel variant contains the full CUDA toolkit, including nvcc.

This setup ensures that FLUX.1-Kontext-Dev runs in a GPU-enabled environment with proper CUDA access and high compute performance.

After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.

Step 6: Virtual Machine Successfully Deployed

You will get visual confirmation that your node is up and running.

Step 7: Connect to GPUs using SSH

NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.

Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.

Now open your terminal and paste the proxy SSH IP or direct SSH IP.

Next, if you want to check the GPU details, run the command below:

nvidia-smi
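
Since we selected the devel CUDA image in Step 5, you can also confirm the compiler toolchain is available; nvcc ships with the devel variant and should report release 12.1:

nvcc --version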


Step 8: Install Miniconda & Packages

After completing the steps above, install Miniconda.

Miniconda is a free minimal installer for conda. It allows the management and installation of Python packages.

Anaconda has over 1,500 pre-installed packages, making it a comprehensive solution for data science projects. On the other hand, Miniconda allows you to install only the packages you need, reducing unnecessary clutter in your environment.

We highly recommend installing Python using Miniconda. Miniconda comes with Python and a small number of essential packages. Additional packages can be installed using the package management systems Mamba or Conda.

For Linux (Ubuntu):

Download the Miniconda installer script:

sudo apt update && sudo apt install wget -y
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh


For Windows:

  • Download the Windows Miniconda installer from the official website.
  • Run the installer and follow the installation prompts.

Run the installer script:
bash Miniconda3-latest-Linux-x86_64.sh


After installing Miniconda, you will see the following message:

Thank you for installing Miniconda3! This confirms that Miniconda has been installed successfully.

Step 9: Activate Conda and Create an Environment

After the installation process, activate Conda using the following command:

source ~/.bashrc

Create a Conda Environment using the following command:

conda create -n flux python=3.11 -y
conda activate flux

  • conda create: This is the command to create a new environment.
  • -n flux: The -n flag specifies the name of the environment you want to create. Here flux is the name of the environment you’re creating. You can name it anything you like.
  • python=3.11: This specifies the version of Python that you want to install in the new environment. In this case, it’s Python 3.11.
  • -y: This flag automatically answers “yes” to all prompts during the creation process, so the environment is created without asking for further confirmation.
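
To confirm the environment was created and activated, here is a quick check using standard conda commands:

conda env list     # the active environment is marked with an asterisk
python --version   # should print Python 3.11.x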


Step 10: Upgrade pip and Install Required Python Packages

Run the following commands to upgrade pip and install the required Python packages:

pip install --upgrade pip
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install git+https://github.com/huggingface/diffusers.git
pip install transformers accelerate safetensors numpy pillow

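Before moving on, it’s worth a quick sanity check that PyTorch sees the GPU and the libraries import cleanly; these one-liners are an optional verification, not part of the original workflow:

python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.is_bf16_supported())"
python -c "import diffusers, transformers; print(diffusers.__version__, transformers.__version__)"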


Step 11: Install HuggingFace Hub

Run the following command to install huggingface_hub:
pip install huggingface_hub


Step 12: Login to Hugging Face

Run the following command to use the CLI to authenticate:
huggingface-cli login

This will ask for your Hugging Face token.

You can generate your token here:
https://huggingface.co/settings/tokens

Use a read access token, copy it, and paste it in the terminal prompt.
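
If you prefer to authenticate non-interactively (for example, from a script), the huggingface_hub Python API provides an equivalent; the token string below is a placeholder for your own read token:

from huggingface_hub import login
login(token="hf_xxx")  # placeholder; treat your real token like a password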

Step 13: Install Sentencepiece & Protobuf

Run the following command to install sentencepiece & protobuf:
pip install sentencepiece protobuf


Step 14: Connect to your GPU VM using Remote SSH

  • Open VS Code on your Mac.
  • Press Cmd + Shift + P, then choose Remote-SSH: Connect to Host.
  • Select your configured host.
  • Once connected, you’ll see SSH: 38.29.145.28 (your VM IP) in the bottom-left status bar.
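
If you haven’t configured a host yet, a minimal ~/.ssh/config entry looks like the sketch below; the host alias is arbitrary, and the IP, port, and key path must match your own NodeShift deployment:

Host nodeshift-gpu
    HostName 38.29.145.28        # your VM IP
    Port 40847                   # the custom SSH port from your deployment page
    User root
    IdentityFile ~/.ssh/id_ed25519

With this entry in place, Remote-SSH: Connect to Host will list nodeshift-gpu as a selectable host.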

Step 15: Run the FLUX.1 Kontext Model Script

  • Inside VS Code (connected via SSH), create a new file named run_flux.py.
  • Paste the following code into it:
# save as run_flux.py
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", 
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
edited_image = pipe(
    image=input_image,
    prompt="Add a hat to the cat",
    guidance_scale=2.5
).images[0]

edited_image.save("output.png")


Then, run it with:
python run_flux.py
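
If your GPU has less VRAM than the recommended A6000/A100, diffusers can offload model components to the CPU between forward passes. This is an optional variant, not part of the original script, and relies on accelerate (installed in Step 10):

# Lower-VRAM variant: use this instead of pipe.to("cuda")
pipe.enable_model_cpu_offload()  # moves submodules to the GPU only when needed; slower, but lighter on VRAM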


Step 16: Preview and Verify Your Edited Image

  • Once your script finishes running, check the generated file output.png in your file explorer on VS Code.
  • Just double-click on output.png to preview it within the VS Code image viewer.
  • In the example here, the model successfully added a hat to the cat, as requested in the prompt. You’ve now verified that the image editing pipeline using FLUX.1 Kontext [dev] is working correctly!
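
If you’d rather verify from the terminal, here is a one-line check that the file was written and is a valid image (uses Pillow, installed in Step 10):

python -c "from PIL import Image; im = Image.open('output.png'); print(im.size, im.mode)"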


Part 2: Set Up a Gradio Web Interface for FLUX.1 Kontext-Dev

After verifying your model works via script, let’s now run the same functionality through a simple drag-and-drop web interface using Gradio.

Step 17: Clone the FLUX Safety Filter Repository

This gives us access to the PixtralContentFilter, which checks generated images for policy violations.

Run the following commands to clone the flux repository:

git clone https://github.com/black-forest-labs/flux
cd flux
pip install -e .


Once installed, you’ll be able to import and use PixtralContentFilter inside your Python app.
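
A quick way to confirm the install worked is a one-line import check; the import path matches the one used in the Gradio app later in this guide:

python -c "from flux.content_filters import PixtralContentFilter; print('content filter import OK')"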

Step 18: Install Gradio

Gradio will give us a GUI to upload images, type prompts, and preview the edited result — no script edits required each time.

Run the following command to install Gradio:
pip install gradio

This installs the latest version of Gradio and its required dependencies.

Step 19: Create the Gradio App File on Your VM

Now that Gradio and the FLUX safety filter are installed, let’s build the app that launches the visual interface.

Create the Gradio Python File

In VS Code (already connected to your GPU VM via Remote SSH):

  • Right-click inside the file explorer (left sidebar)
  • Click on “New File”
  • Name it: flux_gradio_app.py

Open the file and paste the full code below into it:
import gradio as gr
import torch
import numpy as np
from diffusers import FluxKontextPipeline
from transformers import logging
from flux.content_filters import PixtralContentFilter
from PIL import Image
import os

# Prevent CUDA memory fragmentation
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

logging.set_verbosity_error()  # Suppress tokenizer warnings

# Load FLUX.1-Kontext-dev model on GPU
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load safety filter on CPU to save GPU memory
integrity_checker = PixtralContentFilter(torch.device("cpu"))

# Main image editing function
def edit_image(input_image, instruction, guidance_scale):
    edited = pipe(
        image=input_image,
        prompt=instruction,
        guidance_scale=guidance_scale
    ).images[0]

    # Safety filter check
    image_np = np.array(edited) / 255.0
    image_tensor = 2 * image_np - 1
    image_tensor = torch.from_numpy(image_tensor).to("cpu", dtype=torch.float32).unsqueeze(0).permute(0, 3, 1, 2)

    if integrity_checker.test_image(image_tensor):
        return None, "⚠️ Image was flagged by safety filter. Please try another prompt."

    edited.save("output.png")  # Save output image
    return edited, "✅ Successfully generated and saved as output.png"

# Gradio Web UI
demo = gr.Interface(
    fn=edit_image,
    inputs=[
        gr.Image(type="pil", label="Input Image"),
        gr.Textbox(label="Edit Instruction", placeholder="e.g., Add a wizard hat to the cat"),
        gr.Slider(1.0, 5.0, value=2.5, step=0.1, label="Guidance Scale"),
    ],
    outputs=[
        gr.Image(label="Edited Output"),
        gr.Textbox(label="Status")
    ],
    title="FLUX.1 Kontext [dev] – Image Editing by Instruction",
    description="Upload an image and describe your change. Example: 'Make it look rainy' or 'Add fire in the background'."
)

demo.launch(server_name="0.0.0.0", server_port=7860)


Step 20: Securely Access the FLUX.1 Web UI

After launching your Gradio app inside the GPU VM with:
python3 flux_gradio_app.py

You’ll see:
Running on local URL: http://0.0.0.0:7860

This means your server is ready, but we need to securely tunnel the port to your Mac so you can open it in your local browser.
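
As an aside, Gradio can also expose the app through a temporary public link by passing share=True to demo.launch(); that routes traffic through Gradio’s relay servers, so the SSH tunnel described next is the more private option:

demo.launch(server_name="0.0.0.0", server_port=7860, share=True)  # prints a temporary *.gradio.live URL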

Step 21: Connect using SSH Port Forwarding (Mac Terminal)

From your Mac, run the following command:
ssh -N -L 7860:localhost:7860 -p 40847 root@38.29.145.18

Explanation:

  • 7860:localhost:7860 maps remote port 7860 to your local machine
  • -p 40847 is your custom SSH port (update this if needed)
  • root@38.29.145.18 is your VM user and IP

Step 22: Open the App in Your Browser

Once the SSH tunnel is active, open your browser and go to:
http://localhost:7860

You’ll now see the Gradio app interface with:

  • Upload section for the input image
  • Textbox for edit instruction
  • Slider for guidance scale
  • Live output preview

Conclusion

FLUX.1 Kontext [dev] is more than just another model — it’s a practical tool for anyone who wants to transform images based on human-like instructions. With the ability to make consistent edits, retain original context, and run efficiently on modern GPUs, it’s perfect for both builders and researchers.

In this guide, we showed you how to deploy it on a GPU Virtual Machine, run your first edit using Python, and launch a fully working Gradio interface you can access securely from your browser. Whether you’re editing visuals for fun, prototyping creative tools, or testing image workflows — this model gives you an ideal starting point.

Now that everything is running on your own terms, feel free to explore more edits, try chained transformations, or even integrate it into your own apps.

Your canvas is ready — go create.
