Spoiler: It Was Messy, Impressive, and Nothing Like What the Hype Promised
I'll be honest with you. I went into this experiment half-expecting to be disappointed.
I've spent nearly a decade building things slowly — ebooks drafted over weeks, client deliverables refined across multiple revision rounds, content strategies mapped out in Google Docs at odd hours of the night when everyone else is asleep. I'm not someone who romanticizes shortcuts. I've seen too many people chase "fast" and end up with garbage dressed in a nice font.
But the noise around AI and speed had been getting louder, and at some point, I decided I needed to stop forming opinions from the sidelines and actually test the thing properly. Not a casual prompt here and there. A full build. One project. One hour. Real stakes.
So I set a timer, opened my tools, and got to work.
Here's everything that happened.
The Project: A Lead Magnet Ebook, Start to Finish
I didn't pick something easy. A lead magnet ebook sounds simple on the surface, but if you've built one before — especially one meant to actually convert, not just exist — you know it requires a specific kind of thinking. You need a hook that makes sense for the audience. A structure that teaches without overwhelming. Copy that doesn't sound like it was pulled from a corporate template. A title that earns the click. And enough substance that people feel they got something real, not filler dressed up as value.
The topic I chose: building a ghostwriting portfolio when you have no clients yet.
It's something I know intimately. I've lived it. Which meant I could tell immediately when the AI was being useful versus when it was being confidently useless — and that distinction turned out to be important.
I gave myself one rule: no switching tools mid-session. Whatever I started with, I finished with. No rabbit holes. No "let me just check this other thing." The clock was running, and the point was to simulate the kind of pressure a real project operates under.
The First Fifteen Minutes: Surprisingly Good
I started with the outline. I gave the AI a detailed prompt — target reader, goal of the ebook, tone I wanted, what I did not want it to sound like — and what came back was, genuinely, not bad.
It wasn't brilliant. It wasn't the outline I would have built from scratch if I were sitting down with my own ideas and a blank page. But it was structured, it had a logical progression, and it gave me something to react to, which is honestly more than half the battle when you're staring at an empty document at 11 p.m.
I've worked with enough writers and clients to know that the hardest part of any project isn't the execution — it's getting past the blankness. The AI solved that. Immediately. The outline was on the screen in under two minutes, and instead of spending fifteen minutes staring into space, I was already editing, restructuring, and adding my own logic on top of a working skeleton.
That alone was worth something.
By the twelve-minute mark, I had a solid five-chapter outline with section breakdowns. Chapter titles that actually had personality. A clear arc from problem to solution to action. I was ahead of where I usually am at this stage.
Minutes Fifteen to Thirty-Five: Where It Got Complicated
Then I started asking for actual content, and this is where the experience split in two.
The sections I asked the AI to write first — the introductory context, the general "why ghostwriting matters" framing — came out polished but hollow. Smooth sentences. Competent structure. Zero soul. The kind of writing that technically says the right things but doesn't make you feel anything while reading it. I've read thousands of articles and ebooks in my career, and I can spot the difference between writing that came from lived experience and writing that was assembled from patterns. This was assembled.
I rewrote those sections myself. Completely. Which honestly felt right — they were the sections closest to my own story, the ones where my voice had to be present or the whole thing would ring false.
But then something interesting happened.
I asked the AI to write the tactical sections — the step-by-step breakdown of how to create portfolio samples when you have no client work, the list of platforms new ghostwriters often overlook, the script for reaching out to potential clients for free work in exchange for a testimonial. And those sections? They were actually good. Not just passable. Usable. Specific. Structured in a way that a beginner would find genuinely helpful.
The AI, I started to understand, is excellent at information delivery and genuinely limited at perspective. Give it a framework to fill in with facts and steps, and it performs well. Ask it to have a point of view, and it gives you something that feels like a point of view without actually having one.
That's not a small distinction. That's everything, if you care about voice.
Minutes Thirty-Five to Fifty: The Real Work
By this point, I'd stopped thinking of the AI as a writer and started treating it like a very fast research assistant who had read too many mediocre blog posts.
I asked it to generate the example portfolio prompts — ten writing scenarios a new ghostwriter could use to create sample work from scratch. It delivered twenty. I used six, rewrote four from scratch because the framing was off, and discarded the rest.
I asked it to write the closing call-to-action. It gave me something so generic I actually laughed. "Ready to start your ghostwriting journey? Take the first step today!" I replaced it with three lines I wrote in under two minutes that actually sounded like something a human being would say.
I asked it to suggest a title and subtitle. It gave me eleven options. Two of them were interesting. One of them was exactly right with minor adjustments. That felt like a reasonable ratio.
The back half of the hour was me doing the real editorial work — reading everything that had been produced, making decisions about what stayed and what got cut, injecting specificity where there was vagueness, adding examples from my own experience where the AI had used hypothetical placeholders, and generally treating the draft the way I'd treat any rough first pass from a junior writer. With patience, but without sentiment.
At the One-Hour Mark: What I Actually Had
When the timer went off, I had a 2,400-word ebook draft that was, in my honest assessment, about 65% ready.
Not 100%. Not even close to what I would have produced if I'd spent a full day on it. But 65% in one hour is not something I can dismiss. That's a real number. That's a draft with bones and meat on it, not just an outline and good intentions.
The structure was solid. The tactical sections were genuinely useful. The voice was inconsistent — swinging between my actual tone and something blander in the sections I hadn't yet touched — but that was fixable. The title was strong. The opening chapter still needed rewriting from the ground up.
But it existed. Fully. In an hour. When my usual process for something this size would take, at minimum, three focused sessions over two days.
What This Actually Means for How I Work
I've been sitting with this experiment for a few weeks now, and here's where I've landed.
AI didn't replace my skill. It compressed my start time, which is a different and more interesting thing. The expertise I've accumulated over a decade — knowing what a good ebook structure looks like, knowing when copy is earning its place versus taking up space, knowing what a beginner ghostwriter actually needs to hear versus what sounds helpful but isn't — none of that became irrelevant. If anything, it became more necessary. Because without it, I wouldn't have known what to keep, what to throw out, and what to rewrite.
The people who are going to be undone by AI tools are not the skilled ones. They're the ones who were already producing mediocre work and charging for it on the basis that it took them a long time. AI doesn't punish craft. It punishes low-effort execution that was previously protected by the friction of time.
For me, the hour experiment confirmed something I'd suspected but hadn't fully tested: AI is most powerful in the hands of people who already know what good looks like. Because then you're not just generating — you're curating, editing, elevating. And that process, when done well, is indistinguishable in output from something built entirely from scratch.
It just took a fraction of the time.
The Honest Limitations I Won't Pretend Away
Because I refuse to write the version of this essay that makes AI sound like a miracle with no downsides, here's what the experiment also showed me.
Voice is not replicable. Not yet. Every section the AI wrote that I kept required adjustment — sometimes small, sometimes significant — to sound like something I would actually say. If you have a strong, distinctive voice that your audience knows and trusts, you will always need to be the last person in the room with the draft. The AI can bring the furniture. You have to make it feel like home.
Specificity requires input. The AI could only be as specific as the information I gave it. When I gave it a detailed brief, the output was useful. When I was lazy with the prompt, the output was lazy in return. This is obvious in theory and genuinely annoying in practice, because writing a good prompt takes effort, and when you're tired and just want something done, that effort is easy to skip — and the results show it immediately.
It doesn't know what it doesn't know. This is the one that concerns me most, especially for people who don't have the background to catch errors. The AI made several claims in the early draft that were technically not wrong but were contextually misleading — advice that works in theory but doesn't account for real-world messiness. I caught them. Someone newer to the industry might not have. The tool does not flag its own limitations. It simply continues, confidently.
So Would I Do It Again?
Yes. I already have.
But I do it now with a clearer understanding of what the tool is for. It's for compression. It's for momentum. It's for getting past the blankness fast so the real creative work can start sooner. It is not for replacing judgment. It is not for producing work you haven't earned the ability to evaluate.
The hour experiment didn't make me faster. It made me more efficient with the time I was already spending. And in a business where output is currency, that matters.
I just know better than to outsource the part that makes the output worth anything in the first place.
That part still lives in my head, built from nearly ten years of doing this. And no timer I set is going to change that.