This is a submission for the Built with Google Gemini: Writing Challenge
What I Built with Google Gemini
Default WordPress search returns a list of posts matched by keywords. It doesn’t understand intent, doesn’t answer questions, and feels outdated in a world where users expect conversational AI and full-sentence queries.
I built Geweb AI Search — a WordPress plugin that replaces traditional search with an AI assistant powered by Google Gemini.
Instead of a list of links, users get:
- a direct AI-generated answer
- source links to the exact pages used to generate that answer
- optional conversation history for follow-up questions
The plugin intercepts the standard WordPress search form and opens a modal with two modes:
- Autocomplete — instant suggestions powered by WP_Query
- AI answer — a full Gemini response with source attribution
At the core of this architecture is Gemini File Search Store.
Why File Search Store instead of classic RAG?
A typical LLM-powered site search usually requires:
- Converting content into vector embeddings
- Storing them in a vector database (Pinecone, pgvector, Weaviate, etc.)
- Running similarity search on each query
- Retrieving relevant chunks
- Passing them into the LLM as context
That’s a full RAG stack — infrastructure-heavy and operationally complex.
With Gemini File Search Store, the workflow becomes much simpler:
- Convert WordPress posts to Markdown
- Upload documents to a Store via the Gemini API
- When a query arrives, instruct Gemini to use that Store
Indexing, retrieval, and answer generation are handled internally by Gemini.
No separate vector database.
No embedding pipeline.
No retrieval orchestration layer.
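The simplified workflow above can be sketched as request bodies against the v1beta REST API. This is a rough, hedged sketch: the field names follow the beta File Search tool documentation as I understand it and may change, and the store name and model name are illustrative, not the plugin's actual values.

```python
# Sketch of the request bodies involved, assuming the v1beta REST surface.
# Verify field names against the current Gemini API reference before shipping.

BASE = "https://generativelanguage.googleapis.com/v1beta"

def create_store_body(display_name: str) -> dict:
    # POST {BASE}/fileSearchStores — creates a File Search Store
    return {"displayName": display_name}

def query_body(store_name: str, question: str) -> dict:
    # POST {BASE}/models/<model>:generateContent
    # The file_search tool restricts retrieval to the named store(s),
    # so Gemini handles indexing, retrieval, and answer generation itself.
    return {
        "contents": [{"role": "user", "parts": [{"text": question}]}],
        "tools": [{"file_search": {"file_search_store_names": [store_name]}}],
    }
```

In practice the plugin would send these bodies with the site's encrypted API key attached as a header; only the `tools` entry distinguishes a Store-grounded query from a plain `generateContent` call.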
This approach works especially well for:
- corporate websites
- documentation portals
- e-commerce catalogs
- content-heavy blogs
Because the model operates strictly within the uploaded content boundaries, responses stay grounded in the site’s data instead of relying on general model knowledge.
Demo
A live demo is available, so you can test both autocomplete and AI answers right away.
The plugin is available in the official WordPress directory.
What I Learned
Gemini 3 vs 2.5 — what changed and what matters for integration
During integration, I discovered important behavioral differences between Gemini 2.5 and Gemini 3 when working with File Search Store.
In my testing:
- Gemini 2.5 supported structured JSON responses reliably only in non-Store mode.
- When connected to a File Search Store, responses were returned as plain text.
- Gemini 3 improved this by supporting Store-based search together with structured outputs and source attribution.
This directly impacts implementation decisions.
If your UI depends on structured responses (for example, rendering answers programmatically or attaching metadata), the model version matters. The plugin supports both generations, but full structured attribution works correctly with Gemini 3.
Practical takeaway: always test the exact model + Store combination in staging before shipping to production.
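One way to support both generations on the plugin side is to attempt structured parsing first and degrade gracefully to plain text when a Store-connected Gemini 2.5 model returns an unstructured answer. A minimal sketch; the `answer`/`sources` schema here is a hypothetical response shape chosen for illustration, not the API's guaranteed format:

```python
import json

def parse_answer(raw: str) -> dict:
    """Try structured JSON first (Gemini 3 + Store); fall back to plain text (2.5)."""
    try:
        data = json.loads(raw)
        # "answer"/"sources" is an assumed application-level schema
        if isinstance(data, dict) and "answer" in data:
            return {
                "answer": data["answer"],
                "sources": data.get("sources", []),
                "structured": True,
            }
    except json.JSONDecodeError:
        pass
    # Plain-text fallback: render the answer without source attribution
    return {"answer": raw, "sources": [], "structured": False}
```

The UI can then render source links only when `structured` is true, instead of failing on whichever model the site happens to configure.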
Secure API key storage
The Gemini API key is stored encrypted in the WordPress database using libsodium.
Implementation principles:
- The key is encrypted before being saved to the database.
- Only users with proper WordPress capabilities can modify it.
- Decryption happens only at runtime when making API requests.
- No external dependencies are required.
File Search Store opens up new use cases
While building this plugin, I realized that File Search Store is not just about search.
It enables an entire class of AI-driven features built on isolated, trusted content:
- AI-powered site search
- virtual assistants for corporate websites
- product advisors for online stores
- FAQ bots grounded in real documentation
All based on the same principle: upload structured content into a Store and let the model operate strictly within those boundaries.
This provides predictable, content-scoped answers without building a full custom RAG stack.
Google Gemini Feedback
What worked well
The File Search Store concept is the biggest win.
There’s no need to:
- spin up a vector database
- build an embeddings pipeline
- manage similarity search
- maintain retrieval logic
You upload documents, and Gemini handles indexing and retrieval internally.
From a developer’s perspective, this significantly reduces infrastructure complexity and time-to-market.
What caused friction
The API is still in beta (v1beta), and that shows.
1. Multiple base URLs
File uploads go through:
https://generativelanguage.googleapis.com/upload/v1beta
While other operations use:
https://generativelanguage.googleapis.com/v1beta
Working with the same logical entities across different base URLs is confusing.
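One way to keep that split from leaking through the whole codebase is to centralize base-URL selection in a single helper. A minimal sketch, assuming the two v1beta hosts shown above; the operation names are hypothetical:

```python
UPLOAD_BASE = "https://generativelanguage.googleapis.com/upload/v1beta"
STANDARD_BASE = "https://generativelanguage.googleapis.com/v1beta"

# Operations that go through the media-upload host (as observed in v1beta).
UPLOAD_OPERATIONS = {"upload_file", "upload_to_store"}

def endpoint(operation: str, path: str) -> str:
    """Return the full URL for an operation, picking the correct base URL."""
    base = UPLOAD_BASE if operation in UPLOAD_OPERATIONS else STANDARD_BASE
    return f"{base}/{path.lstrip('/')}"
```

If the API later unifies the hosts, only this helper changes.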
2. Different upload flows return different structures
There are two ways to upload files:
- Upload a file first, then attach it to a Store
- Upload directly into a Store
These approaches return different response formats.
If you upload separately and then attach, the path starts with: files/....
But when uploading directly into a Store, you get: documents/....
For File Search Store queries, the documents/... format is required.
This behavior is not immediately obvious and required trial-and-error testing.
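Validating which resource form you actually received, before issuing a Store query, can save that trial and error. A small sketch; the exact shape of document names (a `documents/` segment under the store) is an assumption based on the behavior described above:

```python
def resource_kind(name: str) -> str:
    """Classify a resource name returned by the two upload flows."""
    if name.startswith("files/"):
        return "file"       # uploaded standalone; must still be attached to a Store
    if name.startswith("documents/") or "/documents/" in name:
        return "document"   # directly usable in File Search Store queries
    raise ValueError(f"unexpected resource name: {name}")
```

Failing fast here is friendlier than letting a `files/...` name reach a Store query and produce an opaque API error.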
3. Stability under load
During peak hours, the API occasionally responds slowly or returns intermittent errors.
In production, this requires:
- proper error handling
- retry logic with exponential backoff
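The retry piece can be sketched as a generic wrapper around any API call; a minimal version with exponential backoff and jitter (the attempt counts and delays are illustrative defaults, not tuned production values):

```python
import random
import time

def with_retries(call, max_attempts: int = 4, base_delay: float = 0.5):
    """Run an API call, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Delays grow 0.5s, 1s, 2s, ...; jitter avoids synchronized retries
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

A production version would retry only on transient statuses (429, 5xx) rather than every exception, but the backoff structure is the same.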
What I'd like to see
- Unified base URL for all API operations
- Consistent response structures across upload flows
- Clear documentation with end-to-end File Search Store examples
- Store-level management operations (e.g., delete Store with all documents in one request)
- Long-term: S3-compatible storage interface for easier integration
- A stable non-beta release
Overall, Google Gemini — and especially File Search Store — significantly lowers the barrier to building AI-powered site search.
For WordPress developers, this makes advanced AI search achievable without maintaining a full RAG infrastructure.


