# Outpaint — extend any image beyond its borders
Every team that ships images at scale eventually hits the same wall: the photo you have is the wrong shape for the slot you need. Our new outpaint endpoint extends the canvas in any direction without cropping the subject, without watermarks, and without making you babysit a job queue.
## What it does
`POST /v1/image/outpaint` takes a public image URL and grows the canvas outward. You pick a direction — `all`, `left`, `right`, `top`, or `bottom` — and how many pixels to add per extended side, anywhere from 64 to 512. The original pixels are preserved exactly. Only the new region is generated, and it's generated to be coherent with the lighting, perspective, and structure of what was already there.
The whole thing runs synchronously. You make the HTTP call, you wait, you get a finished image back in the response. There's no callback URL to register, no polling loop to write, no job ID to chase across two services. If your code can call a normal REST endpoint, it can use Outpaint — no special client, no SDK lock-in.
A quick rundown of the request fields:
- `image_url` — public URL of the source image. Required.
- `direction` — one of `all`, `left`, `right`, `top`, `bottom`. Defaults to `all`, which grows the canvas evenly on every side.
- `extend_pixels` — integer between 64 and 512, applied per extended side. Defaults to 256.
- `prompt` — optional text guidance for what should appear in the new region. Capped at 500 characters. Leave it blank and the model just continues whatever it sees at the edges; pass a short hint like `"soft studio backdrop, warm light"` and it'll bias the new pixels toward that.
There's no watermark on the output. The image you get back is yours, ready to drop into a banner, a thumbnail, or a product card.
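Those field constraints are easy to enforce client-side before spending a call. Here's a minimal sketch of a payload builder that mirrors the rules above — illustrative only, not an official SDK:

```python
# Build and validate an Outpaint request body, mirroring the field
# constraints documented above. Illustrative sketch, not an official client.

ALLOWED_DIRECTIONS = {"all", "left", "right", "top", "bottom"}

def build_outpaint_payload(image_url, direction="all", extend_pixels=256, prompt=None):
    """Return a request body for POST /v1/image/outpaint, or raise ValueError."""
    if not image_url.startswith(("http://", "https://")):
        raise ValueError("image_url must be a publicly reachable URL")
    if direction not in ALLOWED_DIRECTIONS:
        raise ValueError(f"direction must be one of {sorted(ALLOWED_DIRECTIONS)}")
    if not 64 <= extend_pixels <= 512:
        raise ValueError("extend_pixels must be between 64 and 512")
    payload = {
        "image_url": image_url,
        "direction": direction,
        "extend_pixels": extend_pixels,
    }
    if prompt is not None:
        if len(prompt) > 500:
            raise ValueError("prompt is capped at 500 characters")
        payload["prompt"] = prompt
    return payload
```

Catching a bad `extend_pixels` or an over-long `prompt` locally saves a round-trip and a rejected call.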
## Why we built it
Image extension sits in an annoying gap. On one side you have classic canvas-and-fill tooling — slap a coloured background behind the subject, hope the seam doesn't show, accept that anything more elaborate needs a designer. On the other side you have heavyweight creative suites that can do beautiful outpainting but want you to upload, click through a wizard, and pay per export. Neither of those options fits a backend pipeline that needs to process thousands of user uploads a day.
We kept seeing the same shape of pain in support threads:
- A marketplace had product photos shot in portrait. Their hero slot was landscape. Cropping the product was not an option.
- A furniture seller had clean cutouts on white but every channel they listed on wanted the item in a believable room.
- A creator tools startup let users upload square photos for a thumbnail builder. YouTube wanted 16:9. Letterboxing looked amateur.
In all three cases the answer was the same: extend the canvas, generate the new region, keep the original untouched. So we built a single endpoint that does exactly that.
The angle we took matters. Outpaint is real edge-aware extension, not canvas-and-fill. The original pixels are preserved exactly — pixel-for-pixel, no resampling, no quality drop on the part of the image you cared about. Only the new region is generated, and it's generated with awareness of what's at the boundary so the seam disappears. Lighting direction carries over. Textures continue. Architectural lines stay straight. A wood grain that runs across the bottom edge keeps running.
We also made a deliberate call on synchronous response. A lot of image generation APIs hand you back a job ID and tell you to either poll or set up a webhook. That's fine for batch workloads, but it's a tax on every interactive use case. If a user uploads an avatar and you need to re-frame it before showing it back to them, you don't want to wire up a queue worker. Outpaint returns the finished image in the same HTTP response. Your code stays linear. Your latency budget stays predictable.
The pipeline is self-hosted on our own infrastructure. That means we control the cost curve and we don't pass per-call surcharges from a third party on to you. It's also why pricing is flat — you're not paying premium rates because some upstream provider is having a busy hour.
## Quickstart
The fastest way to see it work is to point it at any public image URL. Here's the exact curl:
```shell
curl -X POST https://api.pixelapi.dev/v1/image/outpaint \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/source.jpg", "direction": "all", "extend_pixels": 256}'
```
Replace `YOUR_API_KEY` with the key from your dashboard and `https://example.com/source.jpg` with any publicly reachable image. The response comes back in the same connection — no second round-trip needed.
Same call from Python using `requests`:
```python
import requests

resp = requests.post(
    "https://api.pixelapi.dev/v1/image/outpaint",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "image_url": "https://example.com/source.jpg",
        "direction": "all",
        "extend_pixels": 256,
    },
    timeout=60,
)
resp.raise_for_status()
result = resp.json()
print(result)
```
A couple of practical notes from working with the endpoint:
- Start with `direction: "all"` and `extend_pixels: 256` to get a feel for it. That's a clean, square growth on every side and it's enough to turn a tight crop into a comfortable composition.
- If you only need to extend in one direction — say, turning a portrait into a landscape — use `direction: "left"` plus a separate `right` call, or just call `all` with a smaller `extend_pixels`. Picking a single side is faster than growing the whole canvas.
- The `prompt` field is genuinely optional. If you don't pass it, the new region is inferred from the boundary alone, which is usually what you want for product shots and photos where the existing scene should just continue. Use the prompt when you want to nudge the model toward something specific — a sky tone, a backdrop colour, a setting.
- `extend_pixels` is per side, not total. Setting it to 512 with `direction: "all"` adds 512 pixels on the top, bottom, left, and right — so a 1024×1024 input becomes a 2048×2048 output. Plan your dimensions before you call.
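The per-side arithmetic in that last note is worth wrapping in a small helper so you can plan output dimensions before calling the endpoint. A sketch, not part of the API:

```python
# Predict the output canvas size for a given direction and extend_pixels.
# "all" grows both axes by 2 * extend_pixels (one per side); a single
# direction grows exactly one axis by extend_pixels.
def output_size(width, height, direction="all", extend_pixels=256):
    if direction == "all":
        return width + 2 * extend_pixels, height + 2 * extend_pixels
    if direction in ("left", "right"):
        return width + extend_pixels, height
    if direction in ("top", "bottom"):
        return width, height + extend_pixels
    raise ValueError(f"unknown direction: {direction!r}")

print(output_size(1024, 1024, "all", 512))  # → (2048, 2048)
```

Running this before the call lets you reject requests that would blow past whatever dimensions your downstream layout expects.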
## Use cases
**Convert portrait product shots into landscape banners without cropping the subject.** This is the canonical outpainting problem. You shot the product vertically because that's what the photographer's setup gave you, but the homepage hero is 16:9 and the product needs to live inside it. Cropping is off the table — the whole point is for the customer to see the product. With Outpaint you call the endpoint with `direction: "left"` and again with `direction: "right"`, or one call with `direction: "all"` if the subject is centred, and the model fills the sides with content that matches the existing background. The product stays exactly as the photographer captured it; the world around it just gets a little bigger. The same workflow drops straight into a Shopify or marketplace pipeline that needs landscape and square variants from the same source.
**Add room-context around a furniture cutout for marketplace listings.** Cutouts on white look clean but they don't sell. Buyers want to see the chair in a room, the lamp on a side table, the rug under a coffee table. Historically that's a styled photoshoot — expensive, slow, and impossible to redo every time you launch a new colourway. With Outpaint you start from your clean cutout, call the endpoint with a prompt like `"warm living room interior, soft daylight"`, and the model grows a believable room around the piece. The furniture itself is untouched — every detail your photographer captured is still there, pixel for pixel — but now it sits inside a context that helps a buyer visualise it. Run it across a catalogue and you get a roomset library without booking a studio.
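At catalogue scale this is just one payload per cutout, all sharing the same room-context prompt. A minimal sketch — the URLs, prompt wording, and `extend_pixels` value here are illustrative choices, not API defaults:

```python
# Build one Outpaint request body per furniture cutout, sharing a common
# room-context prompt. Sketch only; send each payload with your HTTP client.
ROOM_PROMPT = "warm living room interior, soft daylight"

def catalogue_payloads(cutout_urls, extend_pixels=384):
    return [
        {
            "image_url": url,
            "direction": "all",
            "extend_pixels": extend_pixels,
            "prompt": ROOM_PROMPT,
        }
        for url in cutout_urls
    ]
```

Because the call is synchronous, a plain loop (or a small thread pool) over these payloads is the whole batch job — no queue infrastructure to stand up.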
**Re-frame a square photo into 16:9 for YouTube thumbnails.** Creator tools and thumbnail builders constantly deal with users uploading square photos because that's what their phone or their Instagram exported. YouTube wants 16:9 at 1280×720. Letterboxing looks like an upload mistake. Cropping cuts the subject's head off. Outpaint with `direction: "left"` and `direction: "right"` extends the photo sideways into a true 16:9 frame with the subject still centred and the background continuing naturally. Wire it into the upload flow and the user never has to know there was a format mismatch — they just get a thumbnail that fills the player.
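For the square-to-16:9 case you can compute the per-side extension up front. A quick sketch, assuming the subject is centred and you extend left and right equally; a result above the endpoint's 512-pixel per-side cap would need a second pass:

```python
import math

def sixteen_nine_padding(side):
    """Per-side pixels needed to take a square (side x side) image to 16:9
    by extending left and right equally."""
    target_width = side * 16 / 9
    return math.ceil((target_width - side) / 2)

# A 720x720 upload needs 280 pixels per side to become 1280x720,
# which fits comfortably inside the endpoint's 64-512 per-side range.
print(sixteen_nine_padding(720))  # → 280
```

Feed the result into `extend_pixels` for the `left` and `right` calls and the output lands on the target aspect ratio without cropping.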
## Pricing
Outpaint costs 19 credits per call. In rupees that's ₹0.013 per call, and in dollars it's $0.00015 per call.
A few things worth being upfront about:
- Pricing is per call, not per output dimension. Whether you extend by 64 pixels or 512, on one side or all four, the cost is the same.
- There's no separate fee for using the optional `prompt` field. Guided and unguided generations are billed identically.
- There's no watermark to remove for an extra fee, because there's no watermark in the first place.
- Credits are shared across the rest of the PixelAPI catalogue, so the same balance you use for outpainting also covers the other endpoints in your account.
For a sense of scale: a thousand outpaint calls is about ₹13 or $0.15. A hundred thousand calls is about ₹1,300 or $15. That makes it cheap enough to run on every user upload in a consumer product, and cheap enough to backfill a catalogue of tens of thousands of product images without a finance conversation.
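Those numbers follow directly from the flat per-call rates, so a tiny estimator makes the arithmetic reusable. The rates below are copied from this post — update them if pricing changes:

```python
# Back-of-envelope cost estimator using the flat per-call rates quoted above.
CREDITS_PER_CALL = 19
INR_PER_CALL = 0.013
USD_PER_CALL = 0.00015

def outpaint_cost(calls):
    return {
        "credits": calls * CREDITS_PER_CALL,
        "inr": round(calls * INR_PER_CALL, 2),
        "usd": round(calls * USD_PER_CALL, 2),
    }

print(outpaint_cost(100_000))  # → {'credits': 1900000, 'inr': 1300.0, 'usd': 15.0}
```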
## Try it
Spin up an API key on the dashboard and start calling the endpoint — the first request takes about as long as reading this paragraph. Full reference, parameter details, and response schema live in the docs. If you build something with it, we'd love to see it.