AI color grading — cinematic LUTs and mood presets via one API call
Color is the difference between a product photo that converts and one that doesn't, between a hero image that feels premium and one that feels like a stock thumbnail. Most teams either pay a colorist, fight Lightroom presets that don't quite match, or ship inconsistent imagery and hope nobody notices. We built color-grade so that "make this look like a brand asset" is a single HTTP call.
What it does
POST /v1/image/color-grade takes any public image URL and returns the same image with a coherent color treatment baked in. You pick a preset — cinematic, vintage, warm, cool, brand, or custom — and a grading strength between 0 and 1, and we handle the curves, the channel mixing, the highlight rolloff, and the shadow tinting that normally live behind a colorist's panel of sliders.
The three request fields are intentionally small:
- image_url — public URL of the source image (required).
- preset — one of cinematic, vintage, warm, cool, brand, or custom (required).
- intensity — a float from 0.0 to 1.0 controlling how aggressive the grade is. Defaults to 0.7, which is what we found most teams reach for.
The output is a fully graded image you can drop straight into a CDN, a product page, a social card, or a CMS. There is no "preview" / "render" two-step. You send one request, you get one image back, and you pay 6 credits for it.
The custom preset is the part most people care about once they're past the demo stage. It accepts your own LUT or palette so you can encode a brand book — the exact teal-and-amber your design team agreed on six months ago — into a reusable preset and stop hand-grading every catalogue refresh.
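As a concrete sketch of what that looks like, here is one way to assemble a custom-preset request body in Python. Note that the field for supplying your LUT is not documented in this post, so the lut_url name below is a hypothetical placeholder — check the docs for the real custom-preset parameters.

```python
# Sketch: build a request body for the custom preset.
# NOTE: "lut_url" is an assumed field name for illustration only;
# the real custom-preset parameters live in the API docs.

def build_custom_grade_payload(image_url: str, lut_url: str, intensity: float = 0.7) -> dict:
    """Assemble the JSON body for a hypothetical custom-preset color-grade call."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be between 0.0 and 1.0")
    return {
        "image_url": image_url,
        "preset": "custom",
        "lut_url": lut_url,  # hypothetical: points at your brand LUT file
        "intensity": intensity,
    }

payload = build_custom_grade_payload(
    "https://example.com/source.jpg",
    "https://example.com/brand.cube",
)
print(payload["preset"])  # custom
```

The point of wrapping it in a function is that the brand look becomes one importable call your whole pipeline shares, instead of a JSON blob copy-pasted between scripts.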
Why we built it
If you've ever tried to keep imagery consistent across a real product, you already know the failure mode. Photographers shoot in slightly different lighting. UGC comes in from forty different phones. Marketing pulls a hero asset from a Drive folder that hasn't been touched since last quarter. Each image, individually, is fine. Together they look like four different brands stapled into one storefront.
The existing options for fixing this are all bad in different ways:
- Manual color work. A colorist or a designer in Lightroom is precise but doesn't scale. Five hundred SKUs is a week of clicking. Five thousand is a hire.
- Generic Instagram-style filters. They scale fine, but they're tone-deaf to the source image. A "warm" filter over a product shot that's already warm just blows it out.
- Roll-your-own pipeline. Pillow, OpenCV, a stack of curve adjustments, a junior engineer learning what "lift gamma gain" means on the job. Six weeks later you have a service that mostly works on your test set and falls over on edge cases.
Our angle: a purpose-built grading model behind a single endpoint, with a small, opinionated set of presets that cover the looks people actually ship, and a custom escape hatch for teams with a real brand spec. No subscription tier for "advanced curves," no rendering queue, no client-side WebGL hacks. One POST, one image back.
The differentiator we care about most is the price. At 6 credits per call, color grading a 1,000-image catalogue is a rounding error on your invoice, which is the only way this kind of feature actually gets used in production rather than reserved for hero assets.
Quickstart
Grab an API key from the dashboard, then hit the endpoint directly:
curl -X POST https://api.pixelapi.dev/v1/image/color-grade \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"image_url": "https://example.com/source.jpg", "preset": "cinematic", "intensity": 0.7}'
That's the entire surface area. No multipart upload, no signed URL dance, no client SDK to install before you can see a result.
The Python equivalent using requests:
import requests
API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://api.pixelapi.dev/v1/image/color-grade"
payload = {
"image_url": "https://example.com/source.jpg",
"preset": "cinematic",
"intensity": 0.7,
}
response = requests.post(
ENDPOINT,
headers={
"Authorization": f"Bearer {API_KEY}",
"Content-Type": "application/json",
},
json=payload,
timeout=60,
)
response.raise_for_status()
result = response.json()
print(result)
A few practical notes from how teams have been integrating it:
- Keep intensity around 0.6–0.75 for product imagery. The default of 0.7 is usually right; below 0.5 you stop seeing the grade, and above 0.85 you start crushing skin tones on people shots.
- For batch work, parallelise at the HTTP level. Each call is independent, so a thread pool of 8–16 workers will saturate most catalogue jobs without any extra plumbing.
- If you're calling this from a web app, do it server-side. The API key shouldn't ship to the browser, and you usually want to write the result to your own storage anyway rather than hot-linking the response URL forever.
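The thread-pool suggestion above is a few lines with concurrent.futures. A minimal sketch, using the same endpoint and requests library as the quickstart — the real network call is left commented out since it needs a valid key:

```python
import concurrent.futures
import requests

ENDPOINT = "https://api.pixelapi.dev/v1/image/color-grade"
API_KEY = "YOUR_API_KEY"

def build_request(image_url: str, preset: str = "brand", intensity: float = 0.7) -> dict:
    # Each call is independent, so the payload is the only shared state.
    return {"image_url": image_url, "preset": preset, "intensity": intensity}

def grade_one(image_url: str) -> requests.Response:
    return requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=build_request(image_url),
        timeout=60,
    )

def grade_batch(image_urls, workers: int = 8) -> list:
    # 8-16 workers saturates most catalogue jobs without extra plumbing.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(grade_one, image_urls))

# To run a real backfill (requires a valid API key):
# responses = grade_batch([f"https://example.com/sku-{i}.jpg" for i in range(1000)])
```

Because pool.map preserves input order, the returned responses line up one-to-one with your source URLs, which makes writing graded results back to the right keys trivial.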
Use cases
Normalise a 1,000-product catalogue to a single brand palette
This is the one we hear about most often. An e-commerce team has SKUs shot over two years by three different photographers, each with their own white balance habits. The site looks fine if you only see one product at a time, but the category grid is a mess of warm-leaning leather goods next to cool-leaning ones next to whatever the iPhone shots ended up looking like. With the brand preset — or custom if you've supplied your own LUT — you point a script at your image bucket, fire one call per asset, and write the graded versions back. A 1,000-product backfill is around 6,000 credits and finishes faster than the meeting where someone proposes hiring a retoucher. From then on, the grade lives in your image pipeline: every new upload goes through the same call before it hits the CDN, and the catalogue stays visually coherent without anyone thinking about it.
Apply a vintage tone-curve to user uploads in a photo app
If you're building any kind of consumer photo product — a journal app, a social network, a print-on-demand service — you've probably looked at adding "filters" and decided it was a six-month project nobody wanted to own. The vintage preset gives you that feature in an afternoon. User uploads an image, your backend forwards it to color-grade with preset: "vintage" and an intensity you let the user nudge with a single slider, and you get back a treated image to display or save. Because each call is 6 credits, you can offer the feature on a free tier without it eating your margins, and the intensity knob means the same preset feels different at 0.3 than at 0.9, which keeps the UI from feeling one-note even with a small preset list.
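One small server-side detail worth getting right is the slider-to-intensity mapping. A sketch, assuming a 0–100 UI slider and clamping to the 0.3–0.9 band where the grade stays visible without getting heavy-handed — the clamp range is our own design choice here, not an API requirement:

```python
def slider_to_intensity(slider: int, lo: float = 0.3, hi: float = 0.9) -> float:
    """Map a 0-100 UI slider onto the API's 0.0-1.0 intensity range,
    clamped to a band where the grade stays visible but not crushing."""
    slider = max(0, min(100, slider))  # guard against bad client input
    return round(lo + (hi - lo) * slider / 100, 2)

print(slider_to_intensity(0))    # 0.3
print(slider_to_intensity(50))   # 0.6
print(slider_to_intensity(100))  # 0.9
```

Doing the clamp server-side means a tampered request can never push intensity to an extreme value, and the slider still feels full-range to the user.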
Pre-grade frames before social-media export
Social teams live and die by visual consistency across a feed, and the awkward truth is that most "social-ready" assets are just whatever the design team had time to grade last week. Wire color-grade into your export step — when a designer or a marketer publishes an asset to the social pipeline, the export script runs it through cinematic or warm (whatever your feed's voice is) at a fixed intensity before it hits Buffer / Hootsuite / your own scheduler. Now every post in the feed shares a grade, even when the source images come from completely different shoots. The team stops re-grading the same image three times for three platforms, and the feed actually looks like one brand instead of a Pinterest board.
Pricing
Straightforward, no tiers, no per-feature gating:
- Credits per call: 6
- INR price: ₹0.004 per call
- USD price: $0.00005 per call
That's the whole table. The same price applies whether you're calling cinematic on a single hero image or running custom across a five-thousand-product catalogue. Sub-10 credits per call is what makes this actually usable for catalogue-scale work: you don't have to ration calls or build a tier system around "premium" assets, you just grade everything and move on.
A worked example for the catalogue case: 1,000 images at 6 credits each is 6,000 credits, which at ₹0.004 per call comes to ₹4 for the entire backfill. At $0.00005 per call, the same 1,000-image run is $0.05. That is genuinely the right order of magnitude for a feature you want to leave on by default rather than reserve for special occasions.
If you're integrating this into a free-tier consumer product, the math also works the other direction: a user who grades 20 photos in a session costs you 120 credits, which is small enough that you don't need to put a paywall in front of the feature to keep your unit economics sane.
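The arithmetic in both directions folds into a one-function helper, using the rates from the table above:

```python
CREDITS_PER_CALL = 6
INR_PER_CALL = 0.004
USD_PER_CALL = 0.00005

def grade_cost(n_images: int) -> dict:
    """Credits and currency cost for grading n_images, one call per image."""
    return {
        "credits": n_images * CREDITS_PER_CALL,
        "inr": round(n_images * INR_PER_CALL, 4),
        "usd": round(n_images * USD_PER_CALL, 4),
    }

print(grade_cost(1000))  # the 1,000-image catalogue backfill
print(grade_cost(20))    # a 20-photo free-tier session: 120 credits
```

Wiring a helper like this into your billing dashboard makes the per-feature cost visible before anyone has to ask whether the backfill is affordable.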
Credits roll up across all PixelAPI endpoints, so if you're already using other tools on the platform, color-grade slots into the same credit pool — no separate billing, no separate dashboard, no separate key.
Try it
The fastest path: grab a key, paste the curl above, swap in a real image URL, and look at the result.
- Dashboard / get an API key: https://pixelapi.dev/dashboard
- Docs: https://pixelapi.dev/docs
A reasonable first 30 minutes with the API:
- Run the curl on a single product shot you already have. Note where it lands at intensity: 0.7.
- Re-run with cinematic, vintage, warm, and cool in turn, same intensity, same image. This is the cheapest way to figure out which preset matches your brand voice without reading marketing copy about each one.
- Pick the preset that fits, then sweep intensity from 0.3 to 0.9 in steps of 0.2. You'll feel the right number for your imagery within five calls.
- If none of the built-in presets match, switch to custom and wire in your LUT. This is the path most teams end up on once they're past evaluation, because a real brand spec rarely matches a generic preset perfectly.
- Drop the call into your upload pipeline, your export step, or your catalogue backfill script. That's the integration.
If you hit anything weird — an image that grades in a way you didn't expect, a preset that feels wrong on a specific kind of source, an intensity value that misbehaves — tell us. The preset list is small on purpose, but it's also not frozen, and "this preset doesn't cover the look I'm trying to ship" is exactly the feedback that drives what we add next.
Ship the grade. Stop hand-curving every asset.