Earlier this week I made a change I had been putting off for two months. I went through every "Start Free Trial" button on the site and rewrote it to read "Start Free Trial — $29/m after 5 messages". Same button, same color, same destination. Just a price tag glued onto the label.
Then I sat back and waited for conversion to crater.
That's what I expected. That is what most CRO people I trust have told me to expect over the years. Disclosing a price near the CTA is supposed to scare buyers off. The conventional move is to keep the trial language clean, get the email, and let the upgrade screen handle the money conversation. I have done it that way for a long time. I run the content side of our little two-founder shop here in Bali. My brother Brandon ships the infra. I write the words and run the experiments. So when I changed every CTA on the site, I was running an experiment on my own copy and I was pretty sure the result was going to make me feel stupid.
It did not make me feel stupid. It made me feel like I had been quietly lying to myself for a year.
Why I made the change at all
The reason I changed the buttons in the first place had nothing to do with conversion theory. It was a support ticket.
A user wrote in last Tuesday and said something like: "I clicked Start Free Trial on the homepage and now I am being charged $29 a month and I never saw a price anywhere." He was not angry. He was confused. He thought he was on a free tier. He had clicked through onboarding too quickly, like everybody does, and the trial expiry email landed in his Promotions tab. By the time the charge hit his card he had no memory of what the price even was. The product was working. The bill was correct. The experience was terrible.
Brandon and I talked about it on a call. Brandon's view was that this was a billing UX problem and he could fix it on the dashboard side. My view was that it was upstream of that. The user did not get blindsided in the dashboard. He got blindsided on the marketing site, where I had spent twelve months carefully not telling him the price near the button he was clicking.
There is a phrase I keep coming back to when I write copy for the SME audience we care about: a point-of-sale system never surprises you. That is the standard I want for our hosted OpenClaw thing. People should know what they are buying when they click. Even if the click is for a free trial. Especially if the click is for a free trial.
So I changed the button. Then I changed it everywhere. Hero, pricing page, footer, the comparison tables, the in-line "Try it free" links inside blog posts. Wherever a user could begin the trial flow, the price for what they were beginning was now sitting right next to the button.
What I expected vs. what happened
The hypothesis I went in with was the textbook one. Higher friction at the CTA means a lower click rate. A lower click rate means fewer trial signups. Fewer trial signups mean fewer paid conversions even if intent quality is higher, because most funnels are volume games: better intent rarely makes up for a smaller top of funnel.
The chart in my head looked like a small dip in clicks, an unclear story for trials, and a possibly-positive but possibly-flat story for paid. I told myself this was the kind of thing where you accept a 10-15% conversion hit because the integrity story is worth it. I was bracing for a number that was going to make me defensive on a call with Brandon.
What actually showed up over four days was that click-through on the changed buttons did not drop in any meaningful way. The wobble I saw was within the noise floor of a site that does not get a billion visits a day. Trial signups looked the same. The piece I did not predict was the support volume change. The "I did not realize this was paid" tickets did not just go down. They stopped. Zero in the four days after the change. The week before the change there had been three.
I want to be careful here. Four days is not a study. I do not have enough volume to call this statistically significant. What I am willing to claim is the directional story. The dip I was bracing for did not happen, and the failure mode I was actually paying for did happen to disappear.
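If you want to run the same sanity check on your own numbers, a plain two-proportion z-test is enough to see whether a click-rate wobble clears the noise floor. Every traffic figure below is made up for illustration; I am not publishing our real ones here.

```python
import math

def two_proportion_z(clicks_a, visits_a, clicks_b, visits_b):
    """Normal-approximation z-test for the difference of two rates."""
    p_a = clicks_a / visits_a
    p_b = clicks_b / visits_b
    # pooled rate under the null hypothesis that both periods share one CTR
    p_pool = (clicks_a + clicks_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical: four days before vs. four days after, ~800 visits/day,
# CTR drifting from 4.1% to 3.7%
z, p = two_proportion_z(130, 3200, 118, 3200)
print(z, p)  # p lands well above 0.05: a dip this size is noise at this volume
```

At small-site traffic levels, a relative CTR swing of ten percent in either direction will almost never reach significance over a few days, which is exactly why I am only claiming the directional story.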
Why I think the conventional wisdom missed this one
The CRO playbooks I learned from are almost all built on consumer e-commerce data. A surprise price at the cart on a t-shirt site really does kill conversion, because the buyer has no model for what a t-shirt should cost and the surprise reads as a trick. The price-near-CTA test runs against a buyer who is already loss-averse and already suspicious.
Our buyer is not that buyer. The person clicking "Start Free Trial" on rapidclaw.dev is usually a small business operator who has been quietly evaluating three or four options. He has been told by somebody on a Reddit thread that AI agents cost money and break overnight. He has read the comparison articles. He has a budget in his head before he gets to my page. When my CTA hides the price, I am not being polite. I am being suspicious. I am giving him a reason to think the real number is higher than the budget he had in mind, otherwise why would I be hiding it?
This is the lesson I keep relearning when I write for SMEs instead of for developers. SME operators are not afraid of $29 a month. Most of them are paying $29 a month for fifteen different things they barely use. What they are afraid of is the shape of a trick. The trick-shape is "free trial → unexpected charge". The honest-shape is "free trial, $29/m after". Once they see the honest-shape on the button, the rest of the page reads as honest too. And the honest page outperforms the polished one.
There is a thread connecting this to the content side of what I do. Every time I write a piece that tries to figure out the real number of something — what an AI agent actually costs you to run for a month, what the cheap tier really gets you, where the bills come from — that piece tends to outperform the piece that talks about features. The audience is starving for somebody to do the math out loud. If you are wrestling with this same problem on the infra side, my OpenClaw hosting cost breakdown goes through the self-host vs. managed numbers in the same flat way I am writing here. I keep coming back to it because the buyers who read that piece convert at a higher rate than buyers who read the hero copy. They have already done the spreadsheet, in their heads, with me.
What I did not change, and why
I did not change the upgrade screen. The upgrade screen still reads the same way it has read for a year. I did not change the email sequence. I did not change the dashboard banner. The only change was the marketing site CTAs.
The reason matters. If I had changed everything at once, I would not be able to say anything about the CTA disclosure specifically. I would have a bundle of changes and a bundle of outcomes and a story I could tell either direction. By keeping the change narrow, I have a thin slice of evidence that the disclosure itself was at worst a wash and possibly a small win. That is enough for me to leave it on. It is not enough for me to write a Twitter thread saying "always disclose price near your CTA, you cowards". I am not writing that thread. I am writing this post.
The other thing I did not do is run a real A/B test. I do not have the volume on this site for a clean A/B in four days. I changed the buttons globally, watched the metrics I cared about, and decided on the basis of the things that did not happen — clicks did not drop, signups did not drop, support tickets did drop. I am calling that good enough for a button copy decision that took fifteen minutes to make.
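For a sense of why "do not have the volume" is not false modesty, here is the standard back-of-envelope power approximation for a two-arm test. The baseline click rate and the lift I would want to detect are illustrative assumptions, not our real numbers.

```python
import math

def sample_size_per_arm(p_base, rel_lift, z_alpha=1.96, z_beta=0.84):
    """Rough per-arm sample size for a two-proportion test.

    Defaults correspond to a two-sided 5% significance level (z=1.96)
    and 80% power (z=0.84).
    """
    p_var = p_base * (1 + rel_lift)   # variant rate under the hoped-for lift
    delta = p_var - p_base            # absolute difference to detect
    p_bar = (p_base + p_var) / 2      # average rate for the variance term
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# detecting a 10% relative change on a 4% baseline CTR
print(sample_size_per_arm(0.04, 0.10))  # tens of thousands of visits per arm
```

Run that with a small baseline rate and a modest lift and the answer comes back in the tens of thousands of visits per arm. A site that does not see that traffic in four days cannot run the clean test, so a global change plus later-funnel metrics was the honest option left.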
The piece I did not see coming
A thing I noticed, a week in, that I did not predict at all. Trial-to-paid conversion looks like it is creeping up. I am not ready to claim a number on it because the cohort is small and the sample period is short. But it is moving in the right direction and I think I know why. The trial users who came in after the button change are showing up in their first session knowing what they are paying for and when. That sounds obvious, right up until you watch how trial flows usually behave. Most trials get killed by the second-week confusion: the user is not sure if they are using the product correctly, is not sure when the bill comes, has not internalized what the bill even is. A user who saw "$29/m after 5 messages" on the way in does not have that confusion. He sat down at the dashboard with a price already loaded into his head. He is not surprised by anything. He treats the trial like a paid product on day one. So he uses it like a paid product on day one. So he keeps it.
That is the part I want to write a longer follow-up on. Not yet. I want another two weeks of data first.
What I would tell another founder reading this
Three things, none of them clever.
First, the price-near-CTA prohibition is a holdover from a different audience. If you are selling to small business operators with budget in their heads, the conventional move is the wrong move. Price near the button reads as honesty. Hiding the price reads as a trick. The buyer's instinct is not the e-commerce buyer's instinct.
Second, the metric you should look at is not the click rate on the button. It is the support ticket queue and the trial-to-paid rate. The button is a top-of-funnel thing. The win, if there is one, lands later in the funnel. If you only watch the click rate you will miss the entire story.
Third, do the change before you can prove the change. I sat on this for two months because I wanted a clean experimental design and I did not have the volume for one. The actual decision took fifteen minutes once I let go of the test-design fantasy. Sometimes you ship the obvious thing and the world tells you what happened.
I am leaving the buttons how they are. If the trial-to-paid number keeps moving in this direction, I will write that follow-up. If it drifts back to the old number, I will write that one too.
Either way I am not putting the price back behind the button. Brandon and I do not run this thing to be clever. We run it to be a place a small business can park its agent and forget about it. Forgetting starts at the button.
— Tijo