Docs observability, or measuring docs inside a product-docs system

Jan. 8, 2024

As technical writers we want to know if the docs we’re writing are accomplishing their goals. In other words, we want to know how good docs are relative to the business goals they’re meant to support or improve. Are docs serving their purpose? Which of the three budgets are docs supporting? When tech bubbles burst, roles usually seen as cost centers, such as tech writing, are ripe for layoffs, no matter how staunchly we defend them. That’s why we keep mulling over the question of value and how to measure it.

To answer those questions, we writers often collect indirect metrics, such as page views, global reductions in support burden, manual links to the documentation in tickets, and so on. Few of these are success metrics; more often than not, they’re “it could have been worse” metrics that highlight the lifesaver or palliative role of docs. On the other hand, qualitative user research and surveys are better methods of gauging user satisfaction with docs, but they’re expensive to roll out, hard to analyze, and they suffer from sampling limitations (read: too few people).

Even though the ones I listed are all valid ways of collecting data to argue for the value of what we do as tech writers, they might feel insufficient or too disconnected from KPIs, OKRs, and cash. Why can’t we come up with better measurements of the success of docs or their impact? Isn’t there something we can place on a dashboard that shows docs’ net effect on the value a company is generating? I think so. I think there might be a way of directly connecting docs to product success metrics, one that requires following users through both product and docs.

Tracking the journey of users through a product-docs system

The user is struggling with an API call. After producing several error responses, they follow the link to the docs that’s embedded in the response payload. The doc about the error opens in the browser. Realizing their mistake, the user goes back to their code, modifies a parameter, and tries again. The call now returns an OK – success! A few hours later, the technical writer who created the document looks at a dashboard that shows how many users successfully sent an API call after reading the doc. The writer smiles: it’s docs observability.

What you’ve just read is an example of a different paradigm, one that assumes that the success of docs cannot be analyzed separately from the product they document. As I happen to document observability (o11y) solutions, I’ve chosen to call it “documentation observability”, or do11y. I came up with this term during a conversation with Bob Watson about a post that suggested removing the docs to measure the impact of their disappearance. That post left me wondering: if removing docs is perceived as harmful, then docs are indeed part of the product after all. Why not instrument the docs and observe their behavior as if they were part of a unified UX?
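To ground the opening scenario, here’s a minimal sketch of what the instrumented side of that journey could look like. Everything in it is an assumption: the payload shape, the field names, and the docs URL are invented for illustration, not taken from any real API.

```typescript
// Hypothetical error payload: the API embeds a deep link to the doc
// that explains the error, plus an ID that ties this failed call to
// the docs visit that (hopefully) follows.
interface ApiError {
  code: string;     // machine-readable error code, e.g. "INVALID_PARAM"
  message: string;  // human-readable summary of what went wrong
  docsUrl: string;  // deep link into the docs, carrying a correlation ID
}

function buildApiError(code: string, message: string, correlationId: string): ApiError {
  // The host and query parameter names are placeholders; what matters
  // is that the docs site can forward the ID to the observability back end.
  const url = new URL(`https://docs.example.com/errors/${code.toLowerCase()}`);
  url.searchParams.set("correlation_id", correlationId);
  return { code, message, docsUrl: url.toString() };
}
```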

Software observability, or o11y, has many different definitions, but they all share an emphasis on collecting data about the internal states of software components to troubleshoot issues with little prior knowledge. As opposed to monitoring, observability deals with the unknown unknowns that arise from the sheer complexity of running distributed systems on increasingly complex stacks. In the context of software documentation, something like documentation observability might help us answer questions such as these: Did reading a doc help users complete the task it supports? Which docs are consumed while using a particular feature? What do users do right after reading a doc?

Even though all the previous questions could be answered through user research, or less directly through web analytics or A/B testing, a more efficient, less opinionated approach could be to track users’ journeys through the product and the documentation itself, using techniques such as real user monitoring (RUM) and correlating the resulting data with back-end logs, traces, and metrics. Observability solutions already do this for complex journeys, such as adding items to the shopping cart of an e-commerce site, so why not track docs usage?

Adding observability to documentation

While software has a more predictable, easier-to-test scope — think of inputs and outputs — docs often lack those features. Knowing what the outcome of reading a doc should be is harder, but there’s definitely something to say about docs that don’t produce any effect at all (negative results are useful, too). Applying observability to docs, in this sense, is not just a technical challenge, but also a content strategy and product design quest. If we’re already treating docs as code, why not push this further and treat docs as a technical feature of the product?

Generally speaking, documentation is observable when it’s instrumented to emit usage signals, when those signals can be correlated with the product’s own telemetry, and when the resulting data lets you reconstruct a user’s journey across docs and product.

The assumption is that docs, especially the ones that document rather complicated software, are consumed while using the product, at least during the initial onboarding phases or when new features are rolled out. Each doc doesn’t necessarily need an explicit goal (having one would let us monitor its success rather than observe it), but it does need to be linked to the product by means of deep linking, session tracking, tracking codes, or similar mechanisms. One must be able to track user activity as users jump back and forth between docs and screens.
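On the docs side, the linking mechanism can be as small as reading a tracking parameter from the URL and attaching it to every event the docs site emits. The sketch below assumes a browser environment and a hypothetical /collect endpoint; URLSearchParams and navigator.sendBeacon are standard browser APIs, while everything else is made up for illustration.

```typescript
// Read the correlation ID the product appended to the docs link, if any.
function getCorrelationId(): string | null {
  return new URLSearchParams(window.location.search).get("correlation_id");
}

type DocsEvent = {
  name: string;                        // e.g. "doc.viewed", "snippet.copied"
  page: string;                        // which doc the event refers to
  correlationId: string | null;        // joins docs activity to the product session
  attributes: Record<string, unknown>; // anything else worth recording
  timestamp: number;
};

// Fire-and-forget telemetry: sendBeacon survives page navigation,
// so events aren't lost when the user jumps back to the product.
function emitDocsEvent(name: string, attributes: Record<string, unknown> = {}): void {
  const event: DocsEvent = {
    name,
    page: window.location.pathname,
    correlationId: getCorrelationId(),
    attributes,
    timestamp: Date.now(),
  };
  navigator.sendBeacon("/collect", JSON.stringify(event));
}
```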

The example from the previous section shows one possible implementation mechanism, but there could be many more. Consider, for example, documentation served through a React website, where each section could be a component that triggers events for an observability back end to collect: time spent reading the content, user interactions (did the user copy the code snippet?), and so on. The session could then be correlated with the user ID, so that we’d know the user’s journey through the product itself and find out what came next. Multiply this by thousands of users to get a measure of how docs really support the product.
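As a sketch of that React scenario, a docs section could be wrapped in a component like the one below. It reuses the hypothetical emitDocsEvent helper from the previous snippet, and the event names are invented; a real implementation would likely use an IntersectionObserver to measure actual on-screen visibility rather than mount time.

```tsx
import { ReactNode, useEffect } from "react";
import { emitDocsEvent } from "./telemetry"; // the sketch above, assumed to live in telemetry.ts

type Props = { id: string; children: ReactNode };

// Wraps a docs section and reports how long it stayed mounted and
// whether the user copied anything from it (say, a code snippet).
export function ObservedSection({ id, children }: Props) {
  useEffect(() => {
    const start = Date.now();
    // The cleanup runs when the section unmounts: report reading time.
    return () => {
      emitDocsEvent("doc.section.read", {
        id,
        seconds: (Date.now() - start) / 1000,
      });
    };
  }, [id]);

  return (
    <section onCopy={() => emitDocsEvent("doc.section.copied", { id })}>
      {children}
    </section>
  );
}
```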

Notice that docs observability is close to what UX writers do when tracking the impact of their copy on product usage, with one important difference: unless docs are also UI text or part of the main product UI itself, you’re going to measure activity in a separate UX, one that focuses on docs consumption. Do11y, in this sense, treats docs as part of a distributed system: a product that can evolve independently from others (often because it’s serving more than one) and that won’t fade away once its success has been verified.

What you need to get started

Achieving docs observability is no easy feat. I haven’t done it yet, though here I am talking about it. Outside of the biggest players, like Meta or Google, I doubt any company has tried this before (but please, send me examples if you know of any). This is because the concept of treating docs as a feature of a software product, rather than an ancillary element, hasn’t gotten enough traction in the industry yet (similar to what happened with API-first design). It’s through posts and conversations that we might get started.

Here’s something you can do today: if you’ve recently been discussing docs metrics at your workplace, consider introducing the concept of docs observability to the team and to stakeholders, where observability means treating docs planning, design, and infrastructure the same way you would treat feature planning, design, and infra for the software you’re documenting. This Copernican shift is easier to execute when you don’t have docs yet; nonetheless, it can still work as a gravity well for next actions even when you carry docs debt.

The technical side is perhaps less relevant or worrying, though docs observability would still require instrumentation and a back end able to ingest and process that data. Check if your company already uses an observability solution: you might want to request access to it and involve engineers in a proof of concept, or some fun hackathon project involving a handful of docs and a specific feature, for example. At the same time, you’ll want to check if your docs tooling is compatible with the instrumentation.
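If your company already runs an observability stack, one possible shape for that proof of concept is to emit spans from the docs site with a vendor-neutral client such as the OpenTelemetry JavaScript API. This is only a sketch of that idea, not the method described in this post: it assumes an OpenTelemetry SDK is already configured elsewhere to export to your back end, and the span and attribute names are invented.

```typescript
import { trace } from "@opentelemetry/api";

// Assumes an OpenTelemetry SDK is registered elsewhere and exporting
// to your back end; @opentelemetry/api alone doesn't send anything.
const tracer = trace.getTracer("docs-observability-poc");

export function recordDocRead(page: string, correlationId: string | null): void {
  const span = tracer.startSpan("docs.read");
  span.setAttribute("docs.page", page);
  if (correlationId) {
    // Same ID the product emits, so docs spans and product traces
    // can be joined in the back end.
    span.setAttribute("docs.correlation_id", correlationId);
  }
  span.end();
}
```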

In the end, docs observability, like software observability, is first and foremost a frame of mind that informs the way you’ll approach the design of content and connected systems, and how you’ll measure success.

Acknowledgements

I’d like to thank Bob Watson and Severin Neumann for their valuable feedback on the final draft of this blog post and for their encouragement.