Most of us grew up with the same assumption about how people discover products: you Google something, click a few links, compare options, and eventually buy.
That flow is quietly breaking.
More and more, people are asking tools like ChatGPT, Perplexity, or Google Gemini questions directly. Not “best pillow 2026” as a keyword, but actual human questions like:
“What’s the best pillow for neck pain?” or “Why do I wake up with a stiff neck?”
And instead of ten links, they get one answer.
That changes everything.
Because if your product isn’t part of that answer, you’re effectively invisible.
The moment this clicked for us
We work on a very non-tech product: a pillow. Not exactly the kind of thing you’d expect to think about in the context of AI systems.
But when we started looking at how people talk about sleep and neck pain, a pattern became obvious. The questions people ask are incredibly consistent. The wording varies slightly, but the intent is almost always the same: they’re trying to solve discomfort, not shop for a product.
At the same time, AI tools don’t “browse” like users do. They synthesize. They pull from patterns, repeated statements, and sources they’ve seen enough times to trust.
So we realized something slightly uncomfortable:
our product wasn’t the problem—our presence in the information layer was.
From SEO to something slightly different
We still care about SEO, of course. But it doesn’t fully capture what’s happening here.
We started thinking in a slightly different way. Instead of asking, “How do we rank higher?”, we began asking, “If someone asks this question to an AI, how likely is it that our category—or our approach—shows up in the answer?”
Internally, we’ve been calling this GEO (Generative Engine Optimization). The term seems to be catching on more broadly too, but for us it’s mainly a way to frame the shift.
What it really means in practice is this:
you’re no longer optimizing for clicks—you’re optimizing to be part of the answer itself.
What actually changed in how we work
One of the first things we did was stop thinking in terms of keywords and start thinking in terms of questions.
Not in an abstract way, but literally writing down the exact questions people ask:
“Are water pillows good for neck pain?”
“What kind of pillow keeps your neck aligned?”
“Why do memory foam pillows stop working after a while?”
Once you see them all together, it’s almost repetitive. Different phrasing, same underlying problem.
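To make that "same underlying problem" observation concrete, here is a minimal sketch of how question variants can be grouped by intent. The intent labels and keyword sets are illustrative placeholders, not a real taxonomy; simple substring matching stands in for whatever clustering you'd actually use.

```python
# Group question variants under a shared intent via keyword matching.
# Intent names and keywords below are illustrative, not a real taxonomy.
INTENTS = {
    "neck_pain_pillow": ["pillow", "neck"],
    "foam_degradation": ["memory", "foam"],
}

def classify(question: str) -> str:
    """Return the first intent whose keywords all appear in the question."""
    q = question.lower()
    for intent, keywords in INTENTS.items():
        if all(k in q for k in keywords):
            return intent
    return "unclassified"

questions = [
    "Are water pillows good for neck pain?",
    "What kind of pillow keeps your neck aligned?",
    "Why do memory foam pillows stop working after a while?",
]

for q in questions:
    print(classify(q), "-", q)
```

Running this over a few dozen real questions is usually enough to see how few distinct intents there actually are.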
From there, the work becomes less about writing “content” and more about building consistent answers. The same idea, expressed clearly, repeated across different places: our own site, articles, community platforms, anywhere people (and AI systems) might encounter it.
We also had to unlearn a bit of traditional content writing. Instead of slowly building up to a point, we now lead with it.
Say the answer first. Then support it.
Because that’s how AI reads.
The weird part: products don’t matter as much as concepts
This was probably the biggest mindset shift.
AI tools don’t really recommend “products” in the way humans think about them. They recommend ideas, categories, approaches.
So instead of pushing a specific product, we spent more time reinforcing the concept behind it. In our case, that’s the idea of a water-based pillow as a solution for neck pain and sleep alignment.
That means being consistent with language, making sure the same idea shows up across multiple sources, and connecting it clearly to the problem it solves.
Once that connection becomes strong enough, the product naturally has a place to exist within it.
Distribution looks very different now
Another thing we underestimated at first: your own website isn’t enough.
AI systems seem to rely heavily on distributed signals—articles, discussions, Q&A threads, anything that reinforces the same idea from different angles.
So we started treating different platforms as part of one system. Longer-form writing to explain the idea in depth. Community-style posts that sound like real user experiences. Direct answers to specific questions.
Not to “promote,” but to make the idea legible in different contexts.
How we know if it’s working (sort of)
This part is still messy.
We do track traffic and sales, but that doesn’t tell the full story. What we’ve been experimenting with instead is very simple: we take a set of common questions and see how often our category or approach shows up in AI-generated answers.
It’s not a perfect metric, but it’s surprisingly revealing.
Some weeks, you start seeing the pattern shift. The same explanation appears more often. The framing becomes more consistent. That’s usually a sign that something is sticking.
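The metric itself is simple enough to sketch. Assuming you've collected a batch of AI answers to your question set (manually or via an API), you just measure what share of them mention your category. The answers and term list below are hard-coded placeholders for illustration.

```python
# A rough sketch of the "share of answers" metric described above.
# In practice the answers come from querying AI tools; these are placeholders.
def mention_rate(answers: list[str], terms: list[str]) -> float:
    """Fraction of answers that mention at least one of the category terms."""
    if not answers:
        return 0.0
    hits = sum(
        any(term.lower() in answer.lower() for term in terms)
        for answer in answers
    )
    return hits / len(answers)

answers = [
    "For neck pain, many people find a water-based pillow helps with alignment.",
    "Memory foam and latex pillows are the most common recommendations.",
    "A water pillow lets you adjust firmness by changing the fill level.",
]

rate = mention_rate(answers, ["water pillow", "water-based pillow"])
print(f"{rate:.0%} of sampled answers mention the category")  # 67%
```

Tracking that number weekly, on the same fixed question set, is what makes the shifts visible.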
Why this feels early—but important
It still feels like we’re at the beginning of this shift. There’s no clear playbook yet, especially for physical products.
But one thing does feel clear: discovery is becoming less about navigating options and more about receiving answers.
And if that’s true, then the goal changes.
You’re no longer just competing to be found.
You’re competing to be understood—and included—when the answer is formed.
I’m curious if anyone else here is working on something similar, especially outside of SaaS. Feels like a very different kind of problem space.