I wanted to understand how Imagga's visual search compares to ours, so I signed up, got API keys, and tested both APIs against the same images. The two products turned out to be more different than I expected.
TL;DR: Imagga's "visual search" is built on image categorization and tag matching. Vecstore uses vector embeddings for actual visual similarity. Vecstore is about 8x faster on search (300ms vs 2.5s), doesn't require a separate database, supports text-to-image search, and auto-indexes without manual training. Imagga is stronger at structured image tagging, color extraction, and background removal.
## How the Two Approaches Differ
The biggest takeaway from testing wasn't speed. It was that the two APIs use fundamentally different approaches to image search.
Imagga categorizes images into tags using WordNet taxonomy and then finds other images that share similar tags. When you search for a dog photo, it first categorizes it as border_collie.n.01 with 93.4% confidence, then finds other images that were categorized similarly. It's a categorization-first approach.
Vecstore converts images into vector embeddings that represent visual meaning, then finds the closest matches in vector space. It doesn't categorize or tag anything. It compares what images actually look like.
Both are valid approaches. They solve different problems.
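The difference is easy to see in miniature. Below is a toy Python sketch of both ideas: tag-set overlap for the categorization-first approach, cosine similarity for the embedding approach. This is illustrative only, not either vendor's actual pipeline; the tags and vectors are made up.

```python
import math

# Categorization-first (Imagga-style): match images by shared tags.
def tag_overlap(tags_a, tags_b):
    """Jaccard similarity between two tag sets."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b)

# Embedding-first (Vecstore-style): match images by closeness in vector space.
def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    return dot / (norm_u * norm_v)

# Two dog photos that share most tags...
print(tag_overlap(["dog", "border_collie", "grass"],
                  ["dog", "border_collie", "frisbee"]))   # 0.5

# ...versus two hypothetical embeddings that are close in vector space.
print(round(cosine([0.9, 0.1, 0.3], [0.8, 0.2, 0.4]), 3))  # 0.984
```

The practical consequence: tag overlap can only be as good as the tag vocabulary, while embeddings compare the images directly even when no tag captures what they have in common.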
## Imagga's Own Demo Needs a Second Database
This was the most interesting finding. I opened the network tab on Imagga's visual search demo and watched what happens when you search.
Two requests fire:
Request 1 goes to Imagga's API and returns categories + image IDs:
```json
// Imagga search response
{
  "categories": [{
    "name": "border_collie.n.01",
    "confidence": 93.41
  }],
  "images": [{
    "id": "img_1770651039261-q6ozvx531",
    "distance": 0.387
  }]
}
```
No image URLs. No metadata. Just IDs and distances.
Request 2 goes to a Supabase database to resolve those IDs:
```
// Second request to Supabase
GET /rest/v1/visual_search_images
    ?select=save_id,image_url,file_name
    &save_id=in.(img_1770651039261-q6ozvx531,img_1770651039253-ggmgwfihy,...)
```
Their own demo needs a separate Postgres database just to display search results. That's not a limitation of the demo; it's how the API works. Imagga returns IDs, and you resolve them yourself.
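In application code, that second request becomes a join you own. Here's a minimal Python sketch of the resolution step; the table rows and URL are hypothetical stand-ins for your own database:

```python
# Your ID -> URL/metadata mapping, standing in for a table like the
# demo's visual_search_images. Keeping this in sync is your job.
image_table = {
    "img_1770651039261-q6ozvx531": {
        "image_url": "https://cdn.example.com/dogs/collie.jpg",  # hypothetical
        "file_name": "collie.jpg",
    },
}

# What the Imagga search call gives you back: IDs and distances only.
search_hits = [{"id": "img_1770651039261-q6ozvx531", "distance": 0.387}]

# The client-side join needed before you can render anything.
results = [
    {**image_table[hit["id"]], "distance": hit["distance"]}
    for hit in search_hits
    if hit["id"] in image_table
]
print(results[0]["file_name"])  # collie.jpg
```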
On Vecstore, one request returns everything:
```json
// Vecstore search response
{
  "vector_id": "abc123",
  "score": 0.94,
  "metadata": {
    "image_url": "https://...",
    "name": "Border Collie",
    "category": "pets",
    "price": 45.00
  }
}
```
Image URLs, custom metadata, similarity scores. No second database.
## Architecture Comparison
Here's what each setup looks like in practice:
Imagga:
Your app → Imagga API (returns IDs) → Your database (resolve URLs) → Your storage (serve images)
Vecstore:
Your app → Vecstore API (returns full results with URLs) → Your storage (serve images)
With Imagga you're maintaining a separate database that maps image IDs to URLs and metadata. With Vecstore that mapping lives inside the search itself.
## Speed Comparison
I benchmarked every operation with real API calls.
| Operation | Imagga | Vecstore |
|---|---|---|
| Insert image | 3.8 - 4.7s | ~200ms |
| Train index | ~0.9s (manual step) | Not needed |
| Search query | 2.0 - 2.5s | ~300ms |
| Insert to searchable | ~5-6s + manual retrain | ~200ms (instant) |
Vecstore's 300ms breaks down to about 90ms for embedding generation and 5-8ms for the actual vector search. The rest is network overhead.
Imagga's 2-2.5 seconds covers the image categorization and tag matching. That doesn't include the second call to your own database to resolve image URLs.
## The WordNet Labels
Imagga uses WordNet taxonomy for its category names. In practice that looks like this:
```
border_collie.n.01
loggerhead.n.02
turbine.n.01
Persian_cat.n.01
```
The .n.01 suffix means "noun, first sense." This is standard in computational linguistics but not something most developers want to parse or display to users. You'd need a mapping layer to convert these to human-readable labels.
Vecstore doesn't return category labels at all. It returns similarity scores and whatever metadata you stored when inserting the image. You control the naming.
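That mapping layer is small, but it's yours to write and maintain. A minimal sketch of what it might look like — the `humanize` helper is hypothetical, not part of either API:

```python
def humanize(synset_id: str) -> str:
    """Turn a WordNet synset ID like 'border_collie.n.01' into a display label.
    Hypothetical helper -- Imagga returns the raw synset IDs, so a layer
    like this is something you'd write yourself."""
    lemma = synset_id.split(".")[0]          # drop the '.n.01' part
    return lemma.replace("_", " ").title()   # 'border_collie' -> 'Border Collie'

for raw in ["border_collie.n.01", "loggerhead.n.02", "Persian_cat.n.01"]:
    print(humanize(raw))
# Border Collie
# Loggerhead
# Persian Cat
```

Note this naive version loses information: `loggerhead.n.02` (the sea turtle sense) and `loggerhead.n.01` (a different sense) map to the same string, which is exactly why WordNet carries the sense number in the first place.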
## Index Training
With Imagga, after you insert images into an index, you need to manually call a train endpoint before those images become searchable. When you add new images later, you retrain again.
Here's what the insert-train-search flow looks like:
```shell
# Step 1: Insert images
curl -X POST "https://api.imagga.com/v2/images-similarity/index/save-image" \
  -u "api_key:api_secret" \
  -F "image_url=https://example.com/dog.jpg" \
  -F "index_name=my_index" \
  -F "save_id=img_001"

# Step 2: Manually train the index (required before search works)
curl -X POST "https://api.imagga.com/v2/images-similarity/index/train" \
  -u "api_key:api_secret" \
  -F "index_name=my_index"

# Step 3: Now you can search
curl -X GET "https://api.imagga.com/v2/images-similarity/index/search" \
  -u "api_key:api_secret" \
  -d "image_url=https://example.com/query.jpg" \
  -d "index_name=my_index"
```
For apps where users upload images regularly (marketplaces, social platforms, photo libraries), this means either retraining after every upload or batching retrains and accepting that new images won't be searchable immediately.
On Vecstore, insert and search:
```shell
# Insert (instantly searchable)
curl -X POST "https://api.vecstore.app/api/databases/{id}/documents" \
  -H "X-API-Key: your_key" \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/dog.jpg"}'

# Search
curl -X POST "https://api.vecstore.app/api/databases/{id}/search" \
  -H "X-API-Key: your_key" \
  -H "Content-Type: application/json" \
  -d '{"image_url": "https://example.com/query.jpg", "top_k": 10}'
```
No training step. Images are searchable the moment you insert them.
## Text-to-Image Search
Imagga's visual search is image-to-image only. You search by uploading a photo.
Vecstore supports both. You can type "wind turbine on a hillside" and get matching images back. This works because the model understands both text and images in the same embedding space.
If your users need to search by description (e-commerce, stock photography, content discovery), this matters.
## No Metadata Filtering on Imagga
Since Imagga returns only IDs, there's no way to filter results at search time. You can't say "find similar images where price is under $50" or "find similar images in the shoes category."
With Vecstore, metadata is stored alongside the vectors and returned with results. Filtering by metadata fields is part of the search query itself.
For e-commerce and marketplace use cases where you need to combine visual similarity with business logic, this is a meaningful difference.
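Here's the idea sketched client-side in Python. Vecstore applies the filter server-side as part of the query (check its docs for the actual filter syntax), so treat this as a conceptual sketch with made-up hits, not the API's filter format:

```python
# Hypothetical search hits, each carrying the metadata stored at insert time.
hits = [
    {"score": 0.94, "metadata": {"name": "Trail Runner", "category": "shoes", "price": 45.00}},
    {"score": 0.91, "metadata": {"name": "Leather Boot", "category": "shoes", "price": 120.00}},
    {"score": 0.89, "metadata": {"name": "Dog Leash",    "category": "pets",  "price": 15.00}},
]

# "Find similar images in the shoes category where price is under $50."
filtered = [
    h for h in hits
    if h["metadata"]["category"] == "shoes" and h["metadata"]["price"] < 50
]
print([h["metadata"]["name"] for h in filtered])  # ['Trail Runner']
```

With Imagga's ID-only responses, even this client-side version requires the extra round trip to your own database before the predicate can run.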
## Pricing
The pricing models are different. Imagga charges monthly subscriptions where unused requests expire at the end of each billing cycle. Visual search requires the $79/mo Indie plan or higher.
Vecstore sells credit packs that never expire. All features are included on every plan.
| Volume | Imagga | Vecstore |
|---|---|---|
| 5K operations | $79/mo | $8 |
| 70K operations | $79/mo | ~$66 |
| 300K operations | $349/mo | ~$240 |
At lower volumes, Vecstore is significantly cheaper. At higher volumes the gap narrows. The main structural difference is that Imagga locks features behind higher tiers (face recognition needs the $349/mo Pro plan) while Vecstore includes everything on all plans.
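To make the table comparable, here's the same data normalized to cost per 1,000 operations. This assumes the Imagga plan quota matches the volume shown and that the full quota is used before it expires:

```python
# (label, operations, Imagga monthly price, Vecstore credit-pack price)
# -- figures taken from the pricing table above.
tiers = [
    ("5K",   5_000,   79.0,  8.0),
    ("70K",  70_000,  79.0,  66.0),
    ("300K", 300_000, 349.0, 240.0),
]

per_1k = {
    label: (imagga / ops * 1000, vecstore / ops * 1000)
    for label, ops, imagga, vecstore in tiers
}

for label, (i_cost, v_cost) in per_1k.items():
    print(f"{label} ops: Imagga ${i_cost:.2f}/1K, Vecstore ${v_cost:.2f}/1K")
# 5K ops: Imagga $15.80/1K, Vecstore $1.60/1K
# 70K ops: Imagga $1.13/1K, Vecstore $0.94/1K
# 300K ops: Imagga $1.16/1K, Vecstore $0.80/1K
```

The normalization makes the narrowing visible: roughly a 10x per-operation gap at 5K operations shrinks to well under 2x at 70K and above.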
## What Imagga Does Well
Imagga's core strength is image tagging and categorization. If you need to auto-generate structured labels for images, extract dominant colors, remove backgrounds, or read barcodes, Imagga has mature, well-documented tools for that. Their categorization engine is solid and they've been doing this for a long time.
Their visual search is built on top of that categorization engine, which makes sense for their product. It's a different approach than embedding-based search, with its own strengths: structured categories, confidence scores for each label, and a well-defined taxonomy.
## When to Use Which
Imagga is a better fit when you need:
- Structured image tagging and categorization
- Color extraction and analysis
- Background removal
- Barcode and text recognition in images
Vecstore is a better fit when you need:
- Visual similarity search (find images that look alike)
- Text-to-image search (describe what you want, get matching images)
- Face search across an image library
- Content moderation (NSFW detection)
- Instant indexing without manual training steps
- Search results with image URLs and metadata in one call
If your use case is primarily search, the differences in speed, developer experience, and architecture add up. Vecstore was built for search from the ground up. Imagga was built for image understanding, and search is one application of that.

