We've all been there.
Client sends reference images. No brand guide. No Figma file. Just "the colors are somewhere in these photos."
Your first instinct? Open Figma, grab the eyedropper, start clicking around.
Here's why that approach quietly breaks your color system — and what to use instead.
🔴 The Problem With Manual Eyedropping
When you zoom into an image and eyedrop a single pixel, you're not picking the actual color. You're picking a compressed, interpolated, zoom-blurred approximation of it.
JPEG compression alone can shift pixel values by 5–15%. Combine that with:

- Screen rendering differences
- Zoom interpolation artifacts
- Monitor color profile variations
...and your "exact" brand color is already wrong before you've typed a single line of CSS.
For a hobby project — invisible. For a production Tailwind config feeding a full component library — it ships bugs.
✅ What an Image Extractor Does Differently
A proper image extractor doesn't click one pixel. It samples the entire image systematically.
Here's how a good one works under the hood:
Phase 1 — Grid Sampling
- Sample points: 12 × 12 = 144 pixels
- Coverage: full image, edge to edge
Every region of the image gets sampled — shadows, highlights, midtones, corners. Nothing gets missed because the algorithm "didn't look there."
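To make that concrete, here's a minimal TypeScript sketch of a grid pass, assuming the image is already drawn to a canvas. The function name and the RGBA shape are mine, not the tool's actual code:

```ts
// Hypothetical sketch: sample a 12 × 12 grid from a canvas, edge to edge.
type RGBA = { r: number; g: number; b: number; a: number };

function sampleGrid(ctx: CanvasRenderingContext2D, w: number, h: number, n = 12): RGBA[] {
  const { data } = ctx.getImageData(0, 0, w, h);
  const samples: RGBA[] = [];
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      // Spread sample points across the full image, including the edges.
      const x = Math.round((col / (n - 1)) * (w - 1));
      const y = Math.round((row / (n - 1)) * (h - 1));
      const i = (y * w + x) * 4; // 4 bytes per pixel: R, G, B, A
      samples.push({ r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] });
    }
  }
  return samples; // 144 samples for n = 12
}
```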
Phase 2 — Vibrance Anchor
- Sort all 144 samples by HSL saturation
- Pick the highest-saturation color as the anchor

This guarantees the most vivid, eye-catching color in the image leads the palette.
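A sketch of that step, reusing the RGBA type and the samples from the grid pass above. hslSaturation is the standard RGB-to-HSL saturation formula, not something specific to the tool:

```ts
// Hypothetical sketch: pick the most saturated sample as the palette anchor.
function hslSaturation({ r, g, b }: RGBA): number {
  const max = Math.max(r, g, b) / 255;
  const min = Math.min(r, g, b) / 255;
  if (max === min) return 0; // pure grey has no saturation
  const l = (max + min) / 2;
  const d = max - min;
  return l > 0.5 ? d / (2 - max - min) : d / (max + min);
}

function vibranceAnchor(samples: RGBA[]): RGBA {
  return samples
    .filter((s) => s.a > 0) // drop fully transparent samples
    .sort((a, b) => hslSaturation(b) - hslSaturation(a))[0];
}
```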
Phase 3 — LAB Farthest-Point Sampling
- Color space: CIELAB (perceptual)
- Strategy: maximize perceptual distance between the selected colors
- Filter: remove near-duplicates + transparent pixels
LAB color space models how human eyes perceive color difference — not how screens define it mathematically. Two HEX values can look nearly identical to a human eye but be numerically far apart in RGB. LAB filtering catches that.
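Here's roughly what that phase could look like, again reusing the RGBA type from the sketches above. The conversion is plain sRGB to CIELAB (D65 white point) and the distance is simple CIE76; the real tool may well use a fancier metric like CIEDE2000:

```ts
// Hypothetical sketch: greedy farthest-point sampling in CIELAB.
function rgbToLab({ r, g, b }: RGBA): [number, number, number] {
  // sRGB → linear RGB
  const [lr, lg, lb] = [r, g, b].map((v) => {
    v /= 255;
    return v > 0.04045 ? ((v + 0.055) / 1.055) ** 2.4 : v / 12.92;
  });
  // linear RGB → XYZ, normalized to the D65 white point
  const x = (0.4124 * lr + 0.3576 * lg + 0.1805 * lb) / 0.95047;
  const y = 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
  const z = (0.0193 * lr + 0.1192 * lg + 0.9505 * lb) / 1.08883;
  // XYZ → LAB
  const f = (t: number) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const [fx, fy, fz] = [f(x), f(y), f(z)];
  return [116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)];
}

// CIE76: plain Euclidean distance in LAB space.
function deltaE(p: [number, number, number], q: [number, number, number]): number {
  return Math.hypot(p[0] - q[0], p[1] - q[1], p[2] - q[2]);
}

// Grow the palette greedily: always add the sample farthest (in LAB) from
// everything already picked, and stop when only near-duplicates remain.
function farthestPoints(samples: RGBA[], anchor: RGBA, count = 5, minDist = 10): RGBA[] {
  const picked = [anchor];
  while (picked.length < count) {
    let best: RGBA | null = null;
    let bestDist = -1;
    for (const s of samples) {
      const d = Math.min(...picked.map((p) => deltaE(rgbToLab(s), rgbToLab(p))));
      if (d > bestDist) { bestDist = d; best = s; }
    }
    if (!best || bestDist < minDist) break; // only near-duplicates left
    picked.push(best);
  }
  return picked;
}
```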
The result: a palette that actually reflects what's in the image.
🎯 Draggable Pixel Probes — The Real Game Changer
Auto-sampling is great. But the feature that changes your workflow entirely is draggable pixel probes.
Instead of the algorithm deciding which colors to extract, you get 5 markers you drag anywhere on the image. Each one reads the exact pixel underneath it in real time using an offscreen canvas — zero server calls, zero upload lag.
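A rough sketch of that pixel read, assuming you have the image element and probe coordinates in image space. In a real drag handler you would create the canvas and cache the ImageData once, not on every read:

```ts
// Hypothetical sketch: read the exact pixel under a probe, fully client-side.
function probeColor(img: HTMLImageElement, x: number, y: number): string {
  const canvas = document.createElement("canvas"); // offscreen: never attached to the DOM
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);
  // Note: getImageData throws on cross-origin images without CORS headers.
  const [r, g, b] = ctx.getImageData(x, y, 1, 1).data; // one pixel, no interpolation
  const hex = (v: number) => v.toString(16).padStart(2, "0");
  return `#${hex(r)}${hex(g)}${hex(b)}`.toUpperCase();
}
```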
Practical example:
- Marker 1 → drag to logo → #1A2E4A (navy)
- Marker 2 → drag to background → #F7F3EE (cream)
- Marker 3 → drag to CTA button → #E84C3D (red)
- Marker 4 → drag to body text → #2C2C2C (near black)
- Marker 5 → drag to accent line → #C9A84C (gold)
Five intentional, precise color reads. Not five random colors an algorithm thought looked nice together.
Each probe also shows a coverage percentage — how much of the image that color covers. Immediately tells you what's dominant vs accent without any guesswork.
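The tool doesn't expose how it computes coverage, but a plausible version counts pixels within a small LAB tolerance of the probe color, reusing rgbToLab and deltaE from the Phase 3 sketch (the threshold here is a guess):

```ts
// Hypothetical sketch: percentage of pixels within a LAB tolerance of a probe color.
function coveragePercent(data: Uint8ClampedArray, probe: RGBA, tolerance = 12): number {
  const probeLab = rgbToLab(probe);
  let hits = 0;
  for (let i = 0; i < data.length; i += 4) {
    const lab = rgbToLab({ r: data[i], g: data[i + 1], b: data[i + 2], a: data[i + 3] });
    if (deltaE(lab, probeLab) <= tolerance) hits++;
  }
  return (hits / (data.length / 4)) * 100; // 4 bytes per pixel
}
```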
⚡ Import Methods — All Four
- Drag & Drop → drop file anywhere on the app
- File Picker → JPG, PNG, WebP, SVG, GIF
- Clipboard Paste → Ctrl/Cmd + V (screenshot directly)
- URL Import → paste any image URL

The clipboard paste is the one you'll use constantly. Screenshot a section from a client deck → Ctrl+V → drag probes → copy HEX. No file saving, no naming, no folder hunting.
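If you ever want to wire that paste flow into your own tooling, it's a few lines of standard browser API. This is a generic sketch, not FreeColorTool's actual code:

```ts
// Sketch: accept a screenshot straight from the clipboard (Ctrl/Cmd + V).
document.addEventListener("paste", (e: ClipboardEvent) => {
  for (const item of e.clipboardData?.items ?? []) {
    if (!item.type.startsWith("image/")) continue;
    const file = item.getAsFile();
    if (!file) continue;
    const url = URL.createObjectURL(file); // local blob URL, nothing uploaded
    const img = new Image();
    img.onload = () => {
      // Hand the image to the extractor here (e.g. drawImage + sampleGrid).
      URL.revokeObjectURL(url);
    };
    img.src = url;
  }
});
```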
🔁 My Actual Workflow (Copy This)
- Get reference images from client
- Open image extractor
- Paste image via Ctrl+V
- Drag 5 probes to: logo, bg, CTA, text, accent
- Copy each HEX code
- Feed into color scale generator → full Tailwind shades 50–950
- Paste into tailwind.config.js
- Done

Before: 2–3 hours of back-and-forth eyedropping + client corrections. After: 15 minutes, first try.
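For step 7, the paste into tailwind.config.js looks roughly like this. The brand key and hex values are illustrative placeholders for whatever your scale generator outputs:

```js
// tailwind.config.js (sketch): drop the generated 50–950 scale under a named color.
module.exports = {
  theme: {
    extend: {
      colors: {
        brand: {
          50: "#EFF3F9",  // lightest tint from the generator
          100: "#D8E1EE",
          // ...200–800 from the generator...
          900: "#142339",
          950: "#0C1524", // darkest shade
        },
      },
    },
  },
};
```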
🛠 Tool I Use For This
FreeColorTool — Image Extractor
Free. No login. Runs entirely in the browser. Works on mobile too (touch-draggable probes).
💬 Drop Your Color Workflow Below
How do you handle clients who send images instead of brand guides? Any tools or tricks I missed?
Would love to see what others are using — especially if you've found a way to automate this into a CI pipeline or design token generator.
