Last week a client asked me to flag AI-generated profile photos during user signup. Sounds straightforward until you realize there's no built-in browser API for "is this image fake?" and training your own classifier means months of work and a GPU budget.
I ended up solving it with an HTTP call. Here's how.
## The problem is bigger than you think
AI image generators have gotten absurdly good. Midjourney v6, DALL-E 3, Stable Diffusion XL — the outputs fool most humans. That creates real problems:
- Marketplaces where sellers use AI product photos that misrepresent items
- Dating apps flooded with generated profile pictures
- News platforms where AI images get passed off as photojournalism
- KYC flows where submitted ID photos might be synthetic
Manual review doesn't scale. You need something automated.
## What detection actually looks for
Before we write code, it helps to understand what AI image detectors analyze. There are 12 main signals that separate generated images from real ones:
- Frequency domain artifacts — GANs leave spectral fingerprints invisible to the eye but obvious in Fourier space
- Inconsistent lighting direction — shadows that don't agree with each other
- Texture repetition patterns — diffusion models sometimes tile textures unnaturally
- Edge coherence — boundaries between objects can look too smooth or too sharp
- Metadata analysis — EXIF data either missing entirely or suspiciously generic
- Compression artifacts — generated images show different JPEG artifact patterns than camera photos
- Color distribution — histogram shapes that differ from natural photography
- Noise patterns — sensor noise in real photos follows predictable distributions; AI images don't
- Facial geometry — subtle asymmetries, iris irregularities, teeth artifacts
- Background consistency — warped architecture, impossible geometry in the background
- Fine detail analysis — text, fingers, hair strands where generators still struggle
- GAN fingerprinting — identifying which specific model generated an image
Running all 12 checks yourself would be a significant ML engineering project. Or you can call an API.
## Setting up the detection endpoint
I'm using the AI Deepfake Detector API, which runs as an always-on Standby Actor. That means sub-second response times: no cold starts, no waiting for containers to spin up.
The base URL:
```
https://george-the-developer--ai-deepfake-detector.apify.actor
```
## Node.js implementation
Here's a minimal working example. There's nothing to install; the global `fetch` built into Node 18+ is all you need:
```javascript
async function detectAIImage(imageUrl) {
  const response = await fetch(
    'https://george-the-developer--ai-deepfake-detector.apify.actor',
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer YOUR_APIFY_TOKEN'
      },
      body: JSON.stringify({
        imageUrl: imageUrl
      })
    }
  );

  if (!response.ok) {
    throw new Error(`Detection failed: ${response.status}`);
  }

  return response.json();
}

// Usage
const result = await detectAIImage('https://example.com/suspicious-photo.jpg');

console.log(`AI Generated: ${result.isAiGenerated}`);
console.log(`Confidence: ${result.confidence}%`);
console.log(`Signals detected:`, result.signals);
```
The response gives you a confidence score and which specific signals triggered. That's important because you probably don't want to auto-reject at 51% confidence.
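For reference, here's the response shape the snippets in this post assume. The field names come from the code above; the particular signal names and values here are illustrative, not output from a real request:

```json
{
  "isAiGenerated": true,
  "confidence": 87,
  "signals": ["frequencyArtifacts", "ganFingerprint", "noisePatterns"]
}
```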
## Adding it to an Express upload endpoint
Here's a more realistic example — checking images during file upload:
```javascript
import express from 'express';
import multer from 'multer';

const app = express();
const upload = multer({ dest: 'uploads/' });

const DEEPFAKE_API = 'https://george-the-developer--ai-deepfake-detector.apify.actor';
const APIFY_TOKEN = process.env.APIFY_TOKEN;

app.post('/api/upload-photo', upload.single('photo'), async (req, res) => {
  // First, upload the image to your CDN/storage and get a public URL
  const publicUrl = await uploadToStorage(req.file);

  // Then check if it's AI-generated
  const detection = await fetch(DEEPFAKE_API, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${APIFY_TOKEN}`
    },
    body: JSON.stringify({ imageUrl: publicUrl })
  }).then(r => r.json());

  if (detection.isAiGenerated && detection.confidence > 80) {
    return res.status(422).json({
      error: 'This image appears to be AI-generated',
      confidence: detection.confidence,
      signals: detection.signals
    });
  }

  // Image passed — continue with normal flow
  res.json({ success: true, imageUrl: publicUrl });
});
```
## Setting your confidence threshold
Don't blindly reject everything above 50%. Think about your use case:
| Use Case | Suggested Threshold | Why |
|---|---|---|
| KYC / Identity verification | 60% | Better safe — flag for human review |
| Marketplace product photos | 80% | Avoid false positives on edited photos |
| Social media profiles | 75% | Balance between safety and usability |
| News / journalism | 70% | Flag for editor review |
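The table above is easy to encode as a small lookup with a safe default. The use-case keys and the `DEFAULT_THRESHOLD` fallback here are my own naming, not part of the API:

```javascript
// Suggested confidence thresholds per use case (percentages).
// The keys and the fallback value are illustrative; adapt them to your product.
const THRESHOLDS = {
  kyc: 60,          // flag early for human review
  marketplace: 80,  // tolerate heavily edited product photos
  social: 75,       // balance safety and usability
  news: 70          // route to editor review
};

const DEFAULT_THRESHOLD = 75;

function shouldFlag(detection, useCase) {
  const threshold = THRESHOLDS[useCase] ?? DEFAULT_THRESHOLD;
  return detection.isAiGenerated && detection.confidence >= threshold;
}

// A 65%-confidence hit is flagged for KYC but not for a marketplace
console.log(shouldFlag({ isAiGenerated: true, confidence: 65 }, 'kyc'));         // true
console.log(shouldFlag({ isAiGenerated: true, confidence: 65 }, 'marketplace')); // false
```

Centralizing the numbers this way also makes it easy to tune them later without hunting through route handlers.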
The `signals` array in the response helps too. If `ganFingerprint` and `frequencyArtifacts` both trigger, that's much more conclusive than `metadataAnalysis` alone.
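One way to act on that is a simple evidence score: count strong signals double and everything else once. The strong/weak split and the weights below are my own heuristic, not documented API behavior:

```javascript
// Strong signals are near-conclusive on their own; weaker ones mostly
// matter in combination. This weighting is a sketch, not an API contract.
const STRONG_SIGNALS = new Set(['ganFingerprint', 'frequencyArtifacts']);

function evidenceScore(signals = []) {
  let score = 0;
  for (const signal of signals) {
    score += STRONG_SIGNALS.has(signal) ? 2 : 1; // strong signals count double
  }
  return score;
}

// Two strong signals outweigh a lone metadata hit
console.log(evidenceScore(['ganFingerprint', 'frequencyArtifacts'])); // 4
console.log(evidenceScore(['metadataAnalysis']));                     // 1
```

You could then require, say, a score of 3 or more before auto-rejecting, and send anything between 1 and 2 to human review.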
## Error handling you'll actually need
```javascript
async function safeDetect(imageUrl, timeoutMs = 5000) {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), timeoutMs);

  try {
    const response = await fetch(DEEPFAKE_API, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${APIFY_TOKEN}`
      },
      body: JSON.stringify({ imageUrl }),
      signal: controller.signal
    });

    if (!response.ok) throw new Error(`API error: ${response.status}`);
    return await response.json();
  } catch (err) {
    if (err.name === 'AbortError') {
      console.warn('Detection timed out — allowing image through');
      return { isAiGenerated: false, confidence: 0, timedOut: true };
    }
    throw err;
  } finally {
    clearTimeout(timeout);
  }
}
```
Always have a fallback. If the detection API is slow or down, you probably want to allow the upload and flag it for async review rather than blocking the user.
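In practice that fallback can be a thin wrapper that never throws and tags anything suspicious for later. Here `detect` stands in for the `safeDetect` function above, and `queueForReview` is a placeholder for whatever async moderation mechanism you have; both are assumptions, not part of the API:

```javascript
// Wraps any detect function so the upload handler never fails just
// because detection did. `queueForReview(url, reason)` is a placeholder
// for your own moderation queue.
async function detectOrDefer(detect, imageUrl, queueForReview) {
  try {
    const detection = await detect(imageUrl);
    if (detection.timedOut) {
      await queueForReview(imageUrl, 'detection-timeout');
    }
    return detection;
  } catch (err) {
    // API down or returned an error: allow the image through, review later
    await queueForReview(imageUrl, `detection-error: ${err.message}`);
    return { isAiGenerated: false, confidence: 0, deferred: true };
  }
}
```

The handler then treats `deferred: true` like a pass, while your review queue catches up asynchronously.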
## Try it yourself
You can test the API directly on Apify: AI Deepfake Detector
It costs $0.003 per image — roughly $3 for every 1,000 checks. For most apps, that's negligible compared to the cost of fake content getting through.
## Wrapping up
Adding AI image detection to your app takes about 20 lines of code. The hard part — the ML models, the 12-signal analysis, the infrastructure — is someone else's problem. You just make an HTTP call and decide what to do with the result.
If you're building anything where image authenticity matters, bolt this on now. The longer you wait, the better the generators get.