A reflection on 3 years as a Head of Developer Content.
The Viam docs cover a large, complex developer platform: SDKs in multiple languages and APIs that move at the speed of, well, a startup. And along the way, the docs received several awards.
But where did it all begin? This Web Archive capture shows what the docs looked like three years ago, before I started.
My team and I got to work, and a year after that humble starting point, the docs had transformed into something worth reading. We won multiple awards at the DevPortal Awards, including Best Overall SME DevPortal: we were on the right track.
Improving the docs website
A good docs portal isn't just about the technical writing: it is, after all, a website that users of your product have to interact with most days. With that in mind, we also made it a point to add usability features.
- Dynamic elements for filtering changelog entries, tutorials, and modules (the latter long before the product did): a combination of Typesense, an open-source search engine, and Algolia's InstantSearch.js, an open-source UI library, let us add these dynamic components to our otherwise static site (first sketch below).
- In-text glossary terms that show more information on hover. Hugo is a wonderful framework for building docs sites if you want customization but don't want to use a front-end framework for components (or for the whole site). Here I was able to lean on and learn from the work of the Kubernetes docs writers, who also use Hugo (second sketch below).
- Using GIFs to show features and hardware in action: I decided early on to lean on imagery to bring the product to life, because I feel hardware actuation is best conveyed that way. With Hugo I was able to add shortcodes that serve those GIFs as more bandwidth-friendly videos (third sketch below). Generally, I used Hugo a lot to enforce SEO rules.
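To make the first point concrete, here is roughly how InstantSearch.js and Typesense wire together. This is a minimal sketch: the index name, fields, host, and container IDs are placeholders, not our actual configuration.

```ts
// A sketch of a faceted filtering component on a static page.
// All names below (collection, fields, host) are illustrative.
import instantsearch from "instantsearch.js";
import { searchBox, hits, refinementList } from "instantsearch.js/es/widgets";
import TypesenseInstantSearchAdapter from "typesense-instantsearch-adapter";

const adapter = new TypesenseInstantSearchAdapter({
  server: {
    apiKey: "search-only-api-key", // a search-only key, safe to ship to the browser
    nodes: [{ host: "search.example.com", port: 443, protocol: "https" }],
  },
  additionalSearchParameters: {
    query_by: "title,description", // fields Typesense should match against
  },
});

const search = instantsearch({
  indexName: "tutorials",
  searchClient: adapter.searchClient,
});

search.addWidgets([
  searchBox({ container: "#searchbox" }),
  refinementList({ container: "#languages", attribute: "language" }), // facet filter
  hits({ container: "#hits" }),
]);

search.start();
```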
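For the glossary tooltips, the Hugo shortcode amounts to a few lines of template. This is a sketch of the idea, not the exact shortcode we shipped; the data file name and parameters are illustrative.

```go-html-template
{{/* layouts/shortcodes/glossary-tooltip.html
     Looks up a term in data/glossary.yaml and renders its definition
     as hover text. Parameter and file names are illustrative. */}}
{{ $id := .Get "term_id" }}
{{ $term := index site.Data.glossary $id }}
<span class="glossary-tooltip" title="{{ $term.definition }}">{{ .Get "text" | default $term.name }}</span>
```

Used inline as `{{< glossary-tooltip term_id="component" text="components" >}}`.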
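And the GIF-to-video trick is essentially a shortcode wrapping an HTML video element with the attributes that make it behave like a GIF. Again, a simplified sketch with made-up parameter names:

```go-html-template
{{/* layouts/shortcodes/gif-video.html
     Serves a muted, looping, autoplaying video where a GIF would
     otherwise go; webm with an mp4 fallback. */}}
<video autoplay loop muted playsinline preload="metadata">
  <source src="{{ .Get "webm_src" }}" type="video/webm">
  <source src="{{ .Get "mp4_src" }}" type="video/mp4">
</video>
```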
Automation with GitHub Actions
Keeping on top of the documentation surface area for a complex platform, whilst also maintaining a user-facing web application, is no small feat for a small team. We did what we could to keep up - namely, automation, in our case with GitHub Actions:
- Style guide checking with Vale, a linter focused on prose. It works once you've written regex-based rules that encode your style guide. Luckily, Vale is a long-time favourite of mine: I first found it at MongoDB, where I wrote the rules they still use five years later (and which someone else later released publicly). A sketch of a rule and the CI step follows this list.
- Code sample testing: I wrote a testing suite by putting code samples in files with project setup and teardown, using markup to delineate each sample. GitHub Actions and some bash scripts then ran the tests automatically and generated the published code samples from the delineated regions. The upfront cost is a bit higher, but the end result was e2e tests that caught API changes and outages and gave us confidence in the code samples (see the extraction sketch below).
- Tooling to find dead links and surface links that needed forwarding. htmltest is a great tool for catching broken links, but finding pages that had moved and should be redirected was harder. I eventually solved that with a Netlify integration (I believe it was No more 404s). Netlify also let me move from HTML-level forwarding to HTTP-level redirects (config sketches below).
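To make the Vale point concrete: a rule is a small YAML file pairing a message with patterns to match, and CI just runs the linter over the docs. Both snippets below are illustrative, not our actual rules or workflow.

```yaml
# styles/OurStyle/Wordiness.yml - a made-up example rule.
# "existence" rules flag any match of the listed tokens.
extends: existence
message: "Consider a more direct phrasing than '%s'."
level: warning
ignorecase: true
tokens:
  - in order to
  - utilize
```

```yaml
# .github/workflows/vale.yml - a minimal sketch of running Vale on pull requests.
name: Lint prose
on: pull_request
jobs:
  vale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: errata-ai/vale-action@reviewdog
        with:
          files: docs/
```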
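The code sample pipeline is harder to show briefly, but the core of it is just extracting marked regions from test files after the tests pass. A simplified sketch - the marker syntax, file paths, and names are made up for illustration:

```ts
// extract-snippets.ts - a simplified sketch of the idea, not the actual tooling.
// Test files wrap the code that appears in the docs between marker comments;
// CI runs the tests first, then this script copies the marked regions out.
import { readFileSync, writeFileSync } from "node:fs";

const START = /\/\/ :snippet-start: (\S+)/;
const END = /\/\/ :snippet-end:/;

export function extractSnippets(testFilePath: string): Map<string, string> {
  const snippets = new Map<string, string>();
  let current: string | null = null;
  let lines: string[] = [];

  for (const line of readFileSync(testFilePath, "utf8").split("\n")) {
    const start = line.match(START);
    if (start) {
      current = start[1] ?? null; // snippet name comes from the marker
      lines = [];
    } else if (current && END.test(line)) {
      snippets.set(current, lines.join("\n"));
      current = null;
    } else if (current) {
      lines.push(line);
    }
  }
  return snippets;
}

// After the test run passes, write each marked region where the docs site can include it.
for (const [name, code] of extractSnippets("tests/motor_test.ts")) {
  writeFileSync(`docs/snippets/${name}.snippet.ts`, code + "\n");
}
```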
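And for link checking and HTTP-level redirects, the configuration is pleasantly small. Again, the paths and values here are illustrative:

```yaml
# .htmltest.yml - check the rendered site for broken internal and external links.
DirectoryPath: public
CheckExternal: true
IgnoreURLs:
  - "^https://example.com/flaky-host/"
```

```text
# _redirects - Netlify serves these as HTTP 301s, replacing HTML meta-refresh pages.
/docs/old-page/   /docs/new-page/   301
```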
Generative AI
We also did a lot with generative AI. I wrote a proof of concept for a tool that would update docs based on information from a PR, and then found a vendor that did it even better. Reviewing AI-generated content still requires enormous diligence, but it's something to watch in the future. AI chat is far more advanced, and I was somewhat surprised by how useful it was - if, and only if, used with diligence in fact-checking.
I find myself wishing that people viewed generative AI output as a different kind of search, because I think that framing better preserves the need for critical review. We as an industry have some work to do in improving user literacy around how generative AI works: it's too easy for users to expect capabilities it doesn't have (and to entrust AI chats with passwords and secrets).
The learnings from our AI usage could easily span multiple blog posts, so I'll leave it at this: if you're looking for prebuilt, embeddable AI chat tooling, I'd recommend Inkeep.
A single source of truth: SDK docs and "main" docs
Lastly, we leveraged automatic docs generation to reduce our workload. When I realized people had been copying code samples from the SDK docs into the main docs (and vice versa), I pushed for a single source of truth: code samples that show how to use individual API functions should live with the SDK. The SDK docs are generated from the docstrings in the SDK code, so those samples are colocated with the code and therefore more likely to stay up to date. We then built a system to ingest the SDK docs and generate parts of the main platform docs automatically. Each run surfaced new changes for review, and we could edit the SDK docs when needed. The result was better consistency and less work. A minimal sketch of the splicing step follows.
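The mechanics of that system could fill a post of their own, but the splicing step is the easy part to sketch: take content generated from the SDK docs and replace the marked region of a docs page, leaving the handwritten prose around it untouched. A minimal sketch under assumed marker names, not the actual pipeline:

```ts
// update-api-docs.ts - a sketch of splicing generated content into a docs page.
// The marker strings and paths are illustrative.
import { readFileSync, writeFileSync } from "node:fs";

const BEGIN = "<!-- BEGIN GENERATED: motor-api -->";
const END = "<!-- END GENERATED: motor-api -->";

// Replace the marked region with freshly generated content,
// preserving everything outside the markers.
export function spliceGenerated(docPath: string, generated: string): void {
  const doc = readFileSync(docPath, "utf8");
  const start = doc.indexOf(BEGIN);
  const end = doc.indexOf(END);
  if (start === -1 || end === -1 || end < start) {
    throw new Error(`Generated-content markers not found in ${docPath}`);
  }
  const updated =
    doc.slice(0, start + BEGIN.length) + "\n" + generated + "\n" + doc.slice(end);
  writeFileSync(docPath, updated);
}
```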
Conclusion
While the docs weren't all I worked on, they are the work I took the most pride in. And three years later, as I hand over the docs, there is still work to be done - but I am happy with what we accomplished in that time.
Take a look for yourself: https://docs.viam.com/



