/lovai is a command that structures your AI session into five blocks and posts it to Lovai. It works with Claude Code, Cursor, Codex, and Gemini CLI.
Every day, the decisions you make and the problems you hit during AI sessions vanish the moment you close the terminal. I kept losing the reasoning behind good sessions -- even when the final output looked fine, the "why I chose this over three alternatives" was gone by the next morning. I built this skill to solve that problem. This article covers how it was designed and implemented -- why this structure, where things broke, and what trade-offs I made along the way.
The Full Architecture
Here's the pipeline, end to end:
```
User runs /lovai
        |
        v
Step 1: Session analysis (extract 5 blocks from conversation context)
        |
        v
Step 2: Security filtering (detect and strip secrets)
        |
        v
Step 3: Block composition (auto-assign visibility levels)
        |
        v
Step 4: Metadata tagging (tool name, model, category)
        |
        v
Step 5: Preview display -> user confirmation
        |
        v
Step 6: Post to Lovai via API (draft or publish)
```
Claude Code skills are defined as markdown files under ~/.claude/skills/. They're not code -- they're closer to behavioral instruction documents. That's precisely why design decisions matter so much here. Ambiguous instructions lead to inconsistent AI output.
I'll be honest -- initially I wasn't sure a skill could handle the full pipeline from session analysis to API posting. But since Claude Code's Bash tool can run curl, the entire flow completes inside the skill with zero external dependencies. That realization is what unlocked the architecture.
The Skill Definition -- 5-Block Structure
The core of the skill is deciding what to extract from a session. This is where I spent the most time iterating.
Here's the actual instruction in the skill's markdown definition:
```markdown
### Step 1: Analyze Session

Extract from the current conversation context:

1. **Core Insight**: Most important outcome/decision (1-2 sentences)
2. **Why**: Why this approach was chosen
3. **Gotchas**: Unexpected problems and how they were solved
4. **Code Details**: Key code changes, configs, commands
5. **Learnings**: Tips for next time
```
These five extraction targets map to Lovai's block sections:
```
Block 1: Core Insight -> section: "insight"
Block 2: Approach     -> section: "why"
Block 3: Gotchas      -> section: "how"
Block 4: Code Details -> section: "detail"
Block 5: Learnings    -> section: "tips"
```
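As a sketch, the mapping can be written as a TypeScript lookup table. This is illustrative only -- the actual skill encodes the mapping in natural-language markdown instructions, not code:

```typescript
// Illustrative only: the skill defines this mapping in its markdown
// instructions, not in code. Keys mirror the five extraction targets.
type LovaiSection = "insight" | "why" | "how" | "detail" | "tips";

const BLOCK_SECTIONS: Record<string, LovaiSection> = {
  "Core Insight": "insight",
  "Approach": "why",
  "Gotchas": "how",
  "Code Details": "detail",
  "Learnings": "tips",
};
```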
Why Exactly Five
I started with three blocks (overview, details, takeaway). The problem was that "why I chose this approach" and "where things broke" got tangled together in "details," making the output hard to parse.
When I pushed it to seven or eight blocks, the output became inconsistent. AI tends to force-fill every block, even when there isn't enough substance. You end up with padding.
Five blocks hit the sweet spot -- enough granularity to reconstruct the decision-making process, not so many that the AI starts hallucinating content. Separating Why and Gotchas into independent blocks was the key decision. Finished code is reproducible by anyone. But "why I rejected the alternatives" and "where I got stuck unexpectedly" -- only the person who was there can write those.
The Section Attribute Mistake
Lovai posts use a section attribute to label each block's semantic role:
```json
{
  "blockType": "text",
  "privacy": "public",
  "section": "why",
  "content": "..."
}
```
The valid values are `insight`, `why`, `how`, `tips`, and `detail`. They correspond to the post-structure labels shown in Lovai's UI.
Here's where I tripped up: I initially assumed section was a free-form string and set it to things like "core-insight". Lovai's API accepted the request without error, but the post UI silently dropped the block labels. No error, no warning -- just missing labels. It took me an embarrassingly long time to figure out. The fix was reading the API spec properly, which I should have done from the start.
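Had the API validated the field instead of silently accepting anything, the mistake would have surfaced immediately. A minimal guard might look like this -- my sketch, not Lovai's actual server code:

```typescript
// Sketch of a validation guard -- not Lovai's actual server code. Rejecting
// unknown section values loudly would have caught my "core-insight" mistake.
const VALID_SECTIONS = ["insight", "why", "how", "tips", "detail"] as const;
type Section = (typeof VALID_SECTIONS)[number];

function parseSection(value: string): Section {
  if (!(VALID_SECTIONS as readonly string[]).includes(value)) {
    throw new Error(
      `Unknown section "${value}" -- expected one of: ${VALID_SECTIONS.join(", ")}`,
    );
  }
  return value as Section;
}
```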
Security Filtering -- The Two-Layer Defense
Session logs almost certainly contain API keys, tokens, and credentials. Posting those publicly would be a disaster. Security filtering was the first thing I designed, not an afterthought.
The Actual Secret Detection Patterns
The defense is two layers deep. First, the skill's markdown instructions tell the AI explicitly: "Never include .env contents. Strip API keys and tokens." But relying solely on AI instructions felt insufficient for something this critical.
So the second layer runs server-side when Lovai's API receives the post. Here are the actual regex patterns:
```typescript
const SECRET_PATTERNS: RegExp[] = [
  /sk-lovai-[\w-]+/g,                              // Lovai
  /sk-proj-[\w-]+/g,                               // OpenAI
  /sk-ant-[\w-]+/g,                                // Anthropic
  /sk_live_[\w]+/g,                                // Stripe
  /sk-[\w]{20,}/g,                                 // Generic keys
  /ghp_[\w]+/g,                                    // GitHub PAT
  /AKIA[\w]+/g,                                    // AWS
  /eyJ[\w-]+\.eyJ[\w-]+\.[\w-]+/g,                 // JWT
  /[\w_]*(SECRET|KEY|TOKEN|PASSWORD)\s*=\s*\S+/gi, // .env format
];
```
Anything matched gets replaced with [REDACTED] before storage:
```typescript
// Before
const apiKey = "sk-proj-abc123def456ghi789";
const dbUrl = "postgres://user:password@localhost:5432/mydb";

// After
const apiKey = "[REDACTED]";
const dbUrl = "[REDACTED]";
```
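The replacement pass itself is simple. Here's the presumed shape, with a trimmed-down pattern list for illustration -- the production code may differ:

```typescript
// Presumed shape of the server-side redaction pass; the production code may
// differ. A trimmed pattern list is reused from above for illustration.
const PATTERNS: RegExp[] = [
  /sk-proj-[\w-]+/g,                               // OpenAI
  /ghp_[\w]+/g,                                    // GitHub PAT
  /[\w_]*(SECRET|KEY|TOKEN|PASSWORD)\s*=\s*\S+/gi, // .env format
];

function redactSecrets(content: string): string {
  // Apply each pattern in turn, replacing every match before storage.
  return PATTERNS.reduce(
    (text, pattern) => text.replace(pattern, "[REDACTED]"),
    content,
  );
}
```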
Absolute file paths also get converted to relative paths (src/lib/...). Absolute paths leak usernames and directory structures -- a subtle but real exposure.
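The path rewrite can be sketched as a prefix strip. This version assumes the project root is known up front; the actual skill works from conversation context:

```typescript
// Sketch of absolute-to-relative path conversion. Assumes the project root
// is known up front; the actual skill infers it from conversation context.
function stripProjectRoot(text: string, projectRoot: string): string {
  // Escape regex metacharacters in the root, then drop it wherever it appears.
  const escaped = projectRoot.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  return text.replace(new RegExp(escaped + "/?", "g"), "");
}
```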
Where I Got It Wrong
I'll be candid about a mistake. The SECRET|KEY|TOKEN|PASSWORD pattern in the regex was too aggressive. It caught NEXT_PUBLIC_ environment variables -- values that are intentionally public and safe to share. On the flip side, custom-prefixed secrets like MYAPP_SECRET_KEY with unusual patterns could still slip through.
The current approach combines pattern matching for candidate detection with the AI's contextual understanding for final judgment. The skill instruction says "never include .env file contents," and the AI uses conversation context to assess whether something is actually sensitive. Pattern matching alone has precision limits, so there's an intentional reliance on the AI's comprehension layer too.
It's not perfect. But the worst-case scenario -- accidentally posting a production API key -- is substantially harder to hit.
Multi-Tool Support: Unified Analysis Over Tool-Specific Parsers
Claude Code, Cursor, Codex, Gemini CLI -- each structures session context differently.
| Tool | Session Format | Skill Approach |
|---|---|---|
| Claude Code | Conversation context passed directly to skill | Direct analysis |
| Cursor | In-editor conversation log | Read as context |
| Codex | CLI-based session | Analyze from conversation context |
| Gemini CLI | Unique dialog format | Analyze from conversation context |
The design question was: build dedicated parsers for each tool, or unify on a generic analysis approach?
I went with unified analysis. Two reasons.
First, skills are markdown-based instructions, not code. The more complex the branching logic, the more the AI's interpretation drifts. Natural language instructions don't handle conditional complexity well.
Second, the essential structure of a session is the same regardless of tool. "What were you trying to accomplish?" "What did you try?" "What didn't work?" "What did you learn?" These four elements don't change between Claude Code and Cursor.
That said, a `client` field in `config.json` lets users specify their tool. This is for metadata tagging on the post, not for branching the analysis logic:
```json
{
  "apiKey": "sk-lovai-xxxxx",
  "endpoint": "https://lovai.app/api/posts/external-create",
  "defaultLanguage": "en",
  "client": "claude-code"
}
```
API Integration: Why curl, Why No SDK
The skill posts to Lovai via curl. I considered building an SDK (npm package), but dropped the idea. The reason: the skill's execution environment is Bash. Claude Code skills are markdown instructions -- there's no way to import Node.js modules. curl runs directly from the Bash tool. Zero dependencies, zero maintenance burden.
Setup
Create a Lovai account. Generate an API key from the settings page. Paste the setup command into your CLI. That's it.
The API key is stored at ~/.claude/skills/lovai/config.json. Since ~/.claude/ is user-local, it never gets committed to a repository.
The API Call
```shell
curl -s -X POST "$ENDPOINT" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "LP funnel redesign from AI brainstorming session",
    "primaryLanguage": "en",
    "blocks": [
      {
        "blockType": "text",
        "privacy": "public",
        "section": "insight",
        "content": "Testing 3 LP funnels revealed that problem-solution-pricing outperforms feature-list-pricing..."
      },
      ...
    ],
    "tools": ["claude-code"],
    "models": ["claude-sonnet-4"],
    "category": "marketing",
    "purpose": "session-capture",
    "publish": false
  }'
```
`publish: false` is the default. Posts are always created as drafts. You review and edit on Lovai before publishing.
I debated whether to default to publish or draft. One-command publishing is convenient, but session logs are close to raw data. The security filter runs, but I can't guarantee 100% accuracy. Defaulting to draft was the safe-side call.
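For reference, the request body from the curl example implies a shape roughly like this. It's reconstructed from the example, not an official Lovai SDK type:

```typescript
// Reconstructed from the curl example -- not an official Lovai type.
interface LovaiPostRequest {
  title: string;
  primaryLanguage: string;
  blocks: Array<{
    blockType: "text";
    privacy: "public" | "premium" | "private";
    section: "insight" | "why" | "how" | "detail" | "tips";
    content: string;
  }>;
  tools: string[];
  models: string[];
  category: string;
  purpose: string;
  publish: boolean; // false = create as draft (the default)
}

const draft: LovaiPostRequest = {
  title: "Example session capture",
  primaryLanguage: "en",
  blocks: [
    { blockType: "text", privacy: "public", section: "insight", content: "..." },
  ],
  tools: ["claude-code"],
  models: ["claude-sonnet-4"],
  category: "marketing",
  purpose: "session-capture",
  publish: false, // safe-side default: review on Lovai before publishing
};
```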
Beyond Lovai
The structured output doesn't have to stay on Lovai. Use it as the foundation for a Dev.to article, a blog post, or just keep it as a personal work log. The value is in capturing the session while context is fresh -- where it ends up is your call.
Visibility Auto-Assignment
Lovai posts support per-block visibility: public (anyone), premium (paid access via Stripe Connect), and private (you only).
The skill auto-assigns based on these criteria:
- public: Conceptual explanations, high-level reasoning, gotcha overviews
- premium: Reusable code snippets, config files, concrete data and templates
- private: Not set by the skill (users can change this on Lovai)
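As code, the heuristic would look roughly like this. The skill states these criteria in natural language; the function is just an illustration:

```typescript
// Rough sketch of the visibility heuristic. The skill expresses these
// criteria as natural-language instructions, not as a function.
type Privacy = "public" | "premium";

function suggestPrivacy(section: string, containsCode: boolean): Privacy {
  // Concrete, reusable artifacts (code, configs, templates) lean premium;
  // conceptual reasoning and gotcha overviews stay public.
  if (section === "detail" || containsCode) return "premium";
  return "public";
}
```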
This comes from my own experience as a content buyer. "Why did you choose this approach?" -- I want that for free. "Here's the exact implementation" -- that's worth paying for.
A Real Output: Remotion Video Session
Here's what a 3-hour Remotion implementation session produced:
- **Core Insight** -- CSS transitions turned out lighter and more controllable than spring animations inside Remotion's pipeline
- **Why** -- framer-motion conflicted with Remotion's rendering pipeline
- **Gotchas** -- Misunderstood the relationship between `fps: 30` and `durationInFrames` -- a 2-second animation rendered as 4 seconds
- **Details** -- [premium]
- **Learnings** -- Don't mix external animation libraries with Remotion; stay within its native API surface
Five blocks, one to two lines each. Three hours of trial and error compressed to this granularity.
Three Design Principles in Hindsight
Building this skill surfaced a few things I think are generally applicable:
Skills are behavioral instructions, not code. Ambiguous instructions produce inconsistent AI output. Defining the 5-block structure explicitly was about guaranteeing output reproducibility. The more precisely you specify what you want, the less the AI drifts.
Security must be safe by default. Draft-first posting, automatic secret stripping, relative path conversion. Any design that relies on users remembering to be careful will eventually cause an incident. The defense has to be structural, not behavioral.
Multi-tool support should focus on commonalities, not differences. Building per-tool parsers was tempting. But once I recognized that every session shares the same essential structure -- intent, attempts, failures, takeaways -- the unified approach became both simpler and more maintainable.
There's still plenty to improve. Security filter accuracy and block structure customization are the obvious next targets. But the core problem -- sessions disappearing into the void -- is solved.
Try it at lovai.app.
Related Articles
- Stripe Closed My Connect Account -- Here's What Actually Fixed It in 24 Hours -- The payment infrastructure behind Lovai's creator payouts, and what happens when Stripe pulls the rug
- OpenRouter Structured Output Broke Before Translation Quality Did -- Another case where the "boring" engineering (output structure, fallback layers) matters more than the AI itself