I recently launched ScreenUI, an open-source UI component library + CLI for React / Next.js.
The early response was encouraging:
- 100+ downloads in a few days
- Good engagement on Dev.to
- High search impressions
But something felt off.
When I asked ChatGPT questions like
“How do I use the ScreenUI Button component?”
the answers were often incomplete or just wrong.
That was surprising - the docs looked fine to humans.
What I realized
My documentation works well for developers reading it directly.
But AI tools don’t read docs the same way humans do.
Humans:
- Scroll
- Infer context
- Connect examples mentally
AI:
- Looks for explicit structure
- Needs clear signals
- Breaks when important details are buried in prose
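To make that concrete, one way to give AI tools explicit structure is to publish machine-readable metadata alongside the prose docs. Here’s a minimal sketch of what that could look like - note that the `Button` props, install command, and import path below are hypothetical placeholders, not ScreenUI’s actual API:

```typescript
// Hypothetical machine-readable doc entry for a Button component.
// Every detail an AI needs - props, types, defaults, install command -
// is stated explicitly instead of being buried in prose.

interface PropDoc {
  name: string;
  type: string;
  default?: string;
  description: string;
}

interface ComponentDoc {
  component: string;
  install: string;     // hypothetical CLI command, not ScreenUI's real one
  importPath: string;  // hypothetical path
  props: PropDoc[];
  example: string;
}

const buttonDoc: ComponentDoc = {
  component: "Button",
  install: "npx screenui add button",
  importPath: "@/components/ui/button",
  props: [
    {
      name: "variant",
      type: '"default" | "outline"',
      default: '"default"',
      description: "Visual style of the button",
    },
    {
      name: "disabled",
      type: "boolean",
      default: "false",
      description: "Disables user interaction",
    },
  ],
  example: '<Button variant="outline">Click me</Button>',
};

// Emit as JSON so crawlers and AI tools can consume it directly.
console.log(JSON.stringify(buttonDoc, null, 2));
```

The point isn’t this exact schema - it’s that every fact an AI might be asked about lives in one explicit, parseable place instead of being inferred from scrolling and context.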
High impressions and good UX don’t automatically mean AI understands your library.
Why this matters
More developers are learning tools by asking AI first, not by opening docs.
If AI can’t accurately explain:
- Component usage
- Props
- Installation
your library can look unreliable - even when it isn’t.
That gap between human-readable and AI-readable docs is becoming very real.
Open question
For those building libraries or design systems:
How are you making your documentation work better with AI?
Different structure? Metadata? APIs? Something else entirely?
I’m curious to hear different approaches.