AUTOMATIC1111's Stable Diffusion WebUI is the most popular interface for running AI image generation locally. It has a built-in REST API that lets you generate, edit, and upscale images programmatically.
Free, open source, runs on your GPU. No API costs, no content restrictions.
## Why Use the SD WebUI API?
- Free generation — no per-image costs
- Full control — any model, any LoRA, any settings
- No content policy — generate whatever you want (legally)
- Batch generation — produce hundreds of images programmatically
- All features — txt2img, img2img, inpainting, upscaling, ControlNet
## Quick Setup

### 1. Install
```bash
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui

# Start with API enabled
./webui.sh --api
# On Windows, add --api to COMMANDLINE_ARGS in webui-user.bat instead

# API runs on http://localhost:7860
```
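The first launch downloads models and can take minutes, so scripts should confirm the API is actually up before firing requests. Here's a minimal readiness check using only the standard library (the URL and timeout values are assumptions; adjust to your setup):

```python
import time
import urllib.request
import urllib.error

def wait_for_api(base_url: str, timeout: float = 60.0) -> bool:
    """Poll the WebUI options endpoint until it responds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"{base_url}/sdapi/v1/options", timeout=5) as r:
                if r.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(1)  # not up yet; retry
    return False

# Usage sketch: wait_for_api("http://localhost:7860") before generating.
```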
### 2. Text to Image
```bash
curl -s -X POST http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "a futuristic data center with glowing servers, cyberpunk style, 4k",
    "negative_prompt": "blurry, low quality",
    "steps": 30,
    "width": 768,
    "height": 512,
    "cfg_scale": 7,
    "sampler_name": "DPM++ 2M Karras"
  }' | python3 -c "
import json, sys, base64
data = json.load(sys.stdin)
img = base64.b64decode(data['images'][0])
with open('output.png', 'wb') as f: f.write(img)
print('Saved output.png')
"
```
### 3. Image to Image
```bash
# Convert image to base64 first — must be a single line, or the JSON breaks.
# GNU coreutils (-w 0 disables wrapping); on macOS use: base64 -i input.png | tr -d '\n'
IMG_B64=$(base64 -w 0 input.png)

curl -s -X POST http://localhost:7860/sdapi/v1/img2img \
  -H "Content-Type: application/json" \
  -d "{
    \"init_images\": [\"$IMG_B64\"],
    \"prompt\": \"oil painting style, masterpiece\",
    \"denoising_strength\": 0.5,
    \"steps\": 30
  }" | python3 -c "
import json, sys, base64
data = json.load(sys.stdin)
img = base64.b64decode(data['images'][0])
with open('img2img_output.png', 'wb') as f: f.write(img)
print('Saved img2img_output.png')
"
```
### 4. Get Available Models
```bash
# List models
curl -s http://localhost:7860/sdapi/v1/sd-models | jq '.[].title'

# List samplers
curl -s http://localhost:7860/sdapi/v1/samplers | jq '.[].name'

# List LoRAs
curl -s http://localhost:7860/sdapi/v1/loras | jq '.[].name'

# Current options
curl -s http://localhost:7860/sdapi/v1/options | jq '{model: .sd_model_checkpoint, vae: .sd_vae}'
```
### 5. Switch Model
```bash
# The request blocks until the new checkpoint finishes loading
curl -s -X POST http://localhost:7860/sdapi/v1/options \
  -H "Content-Type: application/json" \
  -d '{"sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors"}'
```
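In practice the titles returned by `/sdapi/v1/sd-models` often include a hash suffix, so it's more robust to look the checkpoint up by substring before switching. A sketch, assuming the `requests` client; the helper names are mine:

```python
def find_model(models, fragment):
    """Return the first model title containing `fragment` (case-insensitive)."""
    for m in models:
        if fragment.lower() in m["title"].lower():
            return m["title"]
    return None

def switch_model(base_url, fragment):
    """Look up a checkpoint by substring and make it the active model."""
    import requests  # third-party, same client as the Python example below
    models = requests.get(f"{base_url}/sdapi/v1/sd-models", timeout=10).json()
    title = find_model(models, fragment)
    if title is None:
        raise ValueError(f"no model matching {fragment!r}")
    # Setting sd_model_checkpoint loads the model; the call blocks until done.
    requests.post(f"{base_url}/sdapi/v1/options",
                  json={"sd_model_checkpoint": title}, timeout=600)
    return title
```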
## Python Example
```python
import requests
import base64
from pathlib import Path

SD_URL = "http://localhost:7860"

# Generate image
response = requests.post(f"{SD_URL}/sdapi/v1/txt2img", json={
    "prompt": "a robot analyzing data on multiple screens, digital art",
    "negative_prompt": "blurry, ugly, deformed",
    "steps": 30,
    "width": 768,
    "height": 512,
    "cfg_scale": 7,
    "batch_size": 4,  # Generate 4 images at once
}).json()

for i, img_b64 in enumerate(response["images"]):
    img_data = base64.b64decode(img_b64)
    Path(f"output_{i}.png").write_bytes(img_data)
    print(f"Saved output_{i}.png")

# Get generation info
info = requests.post(f"{SD_URL}/sdapi/v1/png-info",
                     json={"image": response["images"][0]}).json()
print(f"Generation info: {info['info'][:100]}...")
```
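The response's `info` field is itself a JSON *string*, not a dict; parsing it recovers the seed used for each image, which is what you need to reproduce a result later. A sketch (field names match the txt2img responses I've seen, but treat them as assumptions):

```python
import json

def extract_seeds(response: dict) -> list:
    """Pull the per-image seeds out of a txt2img/img2img response."""
    info = json.loads(response["info"])  # info is a JSON string, not a dict
    return info.get("all_seeds") or [info.get("seed")]

# Re-running txt2img with "seed": <one of these values> and the same
# prompt/settings should reproduce the corresponding image.
```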
## Key Endpoints
| Endpoint | Description |
|---|---|
| `/sdapi/v1/txt2img` | Text to image |
| `/sdapi/v1/img2img` | Image to image |
| `/sdapi/v1/extra-single-image` | Upscale a single image |
| `/sdapi/v1/sd-models` | List models |
| `/sdapi/v1/samplers` | List samplers |
| `/sdapi/v1/loras` | List LoRAs |
| `/sdapi/v1/options` | Get/set options |
| `/sdapi/v1/progress` | Generation progress |
| `/sdapi/v1/interrupt` | Cancel generation |
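The upscale endpoint from the table deserves a quick example since it isn't covered above. A sketch of a `/sdapi/v1/extra-single-image` call; the upscaler name `"R-ESRGAN 4x+"` is an assumption and must match one installed in your WebUI, and the helper names are mine:

```python
import base64
from pathlib import Path

def build_upscale_payload(image_path: str, factor: float = 2.0,
                          upscaler: str = "R-ESRGAN 4x+") -> dict:
    """Build the request body for /sdapi/v1/extra-single-image."""
    return {
        "image": base64.b64encode(Path(image_path).read_bytes()).decode("ascii"),
        "upscaling_resize": factor,  # output = input dimensions * factor
        "upscaler_1": upscaler,      # must match an installed upscaler name
    }

def upscale(base_url: str, image_path: str, out_path: str) -> None:
    """Upscale an image file and save the result."""
    import requests  # third-party, same client as the Python example above
    r = requests.post(f"{base_url}/sdapi/v1/extra-single-image",
                      json=build_upscale_payload(image_path), timeout=300)
    Path(out_path).write_bytes(base64.b64decode(r.json()["image"]))
```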