
Trailguide

Your product tour ran. Did anyone finish it?

The first article I wrote about Trailguide was about why tours should live in your repo as JSON files. The engineering case: version control, CI testing, no vendor lock-in.

This one is about what happens after you ship a tour.
The part most teams skip

You add an onboarding tour. Users start seeing it. You move on to the next feature. Three months later someone asks "is that tour still working?" and nobody actually knows.
That is the default state for most teams. You built the tour, you have no data on it, and the only signal you get is the absence of complaints.

What you actually want to know is pretty simple: how many people start the tour, how many finish it, and at which step most of them drop off.

What the analytics actually show

Trailguide Pro tracks this without any setup. No events to wire up, no analytics SDK to configure. When you open the dashboard for a trail, you see a completion funnel. Each step has a bar showing what percentage of sessions made it that far, color-coded by health.
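Under the hood, a step funnel like this is a simple computation. A minimal sketch, assuming a hypothetical session record that stores the furthest step index each session reached (the shape is illustrative, not Trailguide's actual data model):

```typescript
// Hypothetical session record: the furthest step index a session reached.
type Session = { furthestStep: number };

// For each step, what percentage of sessions made it at least that far?
function funnel(sessions: Session[], stepCount: number): number[] {
  return Array.from({ length: stepCount }, (_, step) => {
    const reached = sessions.filter((s) => s.furthestStep >= step).length;
    return Math.round((reached / sessions.length) * 100);
  });
}

// Example: four sessions walking a three-step tour.
const pct = funnel(
  [{ furthestStep: 2 }, { furthestStep: 2 }, { furthestStep: 0 }, { furthestStep: 1 }],
  3
);
console.log(pct); // [100, 75, 50]
```

The bar for a step is just that percentage; the "unhealthy" step is the one with the largest drop from the bar before it.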

Most teams find the same pattern: one step causes 60 to 70 percent of abandonment. That step is usually too long, asks the user to do something before they're ready, or fires on a slow-loading element so the tooltip appears before the user can even read what's on the page.
Once you can see it you fix it. Before that you're guessing.

There's also a chart showing starts versus completions over time, so when you ship a fix you can actually see whether it moved the number. It compares the current period to the prior one. The loop is tight: change something, wait a few days, see if it helped.
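The period comparison is two aggregates over adjacent time windows. A rough sketch of the idea, with a made-up event shape (Trailguide collects these events for you; this is only to show the arithmetic):

```typescript
// Hypothetical trail event: a start or a completion, with a timestamp.
type TrailEvent = { kind: "start" | "complete"; at: Date };

// Completion rate (completions / starts) within a half-open window [from, to).
function completionRate(events: TrailEvent[], from: Date, to: Date): number {
  const inWindow = events.filter((e) => e.at >= from && e.at < to);
  const starts = inWindow.filter((e) => e.kind === "start").length;
  const completes = inWindow.filter((e) => e.kind === "complete").length;
  return starts === 0 ? 0 : completes / starts;
}

// Change in completion rate: current period vs. the prior period of equal length.
function periodDelta(events: TrailEvent[], now: Date, days: number): number {
  const ms = days * 24 * 60 * 60 * 1000;
  const prevStart = new Date(now.getTime() - 2 * ms);
  const curStart = new Date(now.getTime() - ms);
  return (
    completionRate(events, curStart, now) -
    completionRate(events, prevStart, curStart)
  );
}
```

A positive delta after you ship a fix is the signal you're looking for; a flat one means the step you changed wasn't the real problem.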

Hearing from users directly (the most important part)

The other useful thing is getting feedback at the right moment in the flow.

Trailguide has a feedback step type. You drop it anywhere in a tour, after a tricky step or before the last one, and it shows a short prompt: rating, freeform comment, or both. The responses go into Product Signals, which does automatic sentiment scoring and pulls word themes out of the free text.
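Because trails are JSON files, a feedback step sits in the same file as the regular steps. This snippet is only a sketch; the field names here are made up for illustration, not Trailguide's documented schema:

```json
{
  "id": "onboarding",
  "steps": [
    { "type": "tooltip", "target": "#new-project", "body": "Create your first project here." },
    {
      "type": "feedback",
      "prompt": "How was that step?",
      "mode": "rating-and-comment"
    },
    { "type": "tooltip", "target": "#invite", "body": "Invite a teammate to try it with you." }
  ]
}
```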

You end up with a ranked list of the words users reach for when they're confused or stuck. That's a different signal than a completion rate. One tells you where people leave. The other tells you why.
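At its simplest, theme extraction is frequency counting over the free-text comments with common words filtered out. A toy sketch of the idea (Product Signals' actual pipeline is more sophisticated than this):

```typescript
// Words too common to be a theme; a real list would be much longer.
const STOPWORDS = new Set(["the", "a", "i", "it", "is", "to", "and", "of", "was", "by"]);

// Count non-trivial words across comments, most frequent first.
function wordThemes(comments: string[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const comment of comments) {
    for (const word of comment.toLowerCase().match(/[a-z']+/g) ?? []) {
      if (STOPWORDS.has(word)) continue;
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

const ranked = wordThemes([
  "Confused by the billing step",
  "The billing page was confusing",
]);
console.log(ranked[0]); // ["billing", 2]
```

Even this crude version surfaces "billing" as the theme; the point is that the words users repeat are a ranked to-do list.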

The full picture

The first article ended with "one file, two jobs." The same JSON that shows users an onboarding tour also runs as a Playwright regression test in CI.
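The dual use works because a tour step and a test step carry the same information: a selector and an expectation. A rough sketch of the mapping as a pure function (the step shape and the emitted strings are illustrative; the real runner drives Playwright directly):

```typescript
// Illustrative step shape from a trail JSON file.
type TrailStep = { type: "tooltip" | "feedback"; target?: string };

// Map tour steps to the checks a regression test would run:
// every tooltip target must be visible on the page.
function toAssertions(steps: TrailStep[]): string[] {
  return steps
    .filter((s) => s.type === "tooltip" && s.target)
    .map((s) => `expect(page.locator(${JSON.stringify(s.target)})).toBeVisible()`);
}

const checks = toAssertions([
  { type: "tooltip", target: "#new-project" },
  { type: "feedback" },
  { type: "tooltip", target: "#invite" },
]);
console.log(checks.length); // 2
```

If a selector disappears from the app, the same file that shows users the tour fails the CI run, which is the "two jobs" part.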

The fuller picture is that you build the tour, users walk through it, CI catches it if something breaks, and the analytics and feedback tell you what to fix next. Then you update the JSON and ship again.

Nothing about that requires a vendor dashboard or a third party owning your content. The tours are files in your repo. The test runs in your CI pipeline. The analytics live in the Pro dashboard but they point back to a problem you fix in code.

That's the thing that took me a while to figure out how to explain. It's not really a tour tool. It's the whole cycle.

Free runtime at github.com/hellotrailguide/trailguide or try the Pro Editor free for 14 days at gettrailguide.com.
